Similar Documents
20 similar documents found.
1.
This paper compares nonparametric estimation methods for the generator z of backward stochastic differential equations (BSDEs) under several types of kernel functions. Nonparametric estimates of the generator z are constructed with different kernels, and the accuracy of the resulting estimates under eight different kernel functions is compared in the mean squared error (MSE) sense. The statistical analysis shows that the Gaussian kernel gives the best estimation performance.
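The abstract does not spell out the estimator itself; as a rough illustration of comparing kernel functions by MSE, the sketch below runs a Nadaraya-Watson regression on simulated data with several kernels and reports the MSE of each. The kernels, bandwidth, and simulated target function are hypothetical stand-ins, not the BSDE setting of the paper.

```python
import numpy as np

# Hypothetical illustration: compare kernel choices by MSE in a
# Nadaraya-Watson regression, in the spirit of comparing kernels for
# nonparametric estimation of the BSDE generator z.
rng = np.random.default_rng(0)

kernels = {
    "gaussian":     lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi),
    "epanechnikov": lambda u: 0.75 * np.maximum(1 - u**2, 0),
    "uniform":      lambda u: 0.5 * (np.abs(u) <= 1),
    "triangular":   lambda u: np.maximum(1 - np.abs(u), 0),
}

def nw_estimate(x_grid, x, y, kernel, h):
    """Nadaraya-Watson estimate of E[y | x] on x_grid with bandwidth h."""
    w = kernel((x_grid[:, None] - x[None, :]) / h)        # (grid, n) kernel weights
    return (w @ y) / np.maximum(w.sum(axis=1), 1e-12)

# Simulated data: a smooth target function plus noise (stand-in for the
# regression that underlies the generator estimate).
n, h = 500, 0.15
x = rng.uniform(-1, 1, n)
z_true = lambda x: np.sin(np.pi * x)                      # hypothetical target
y = z_true(x) + 0.2 * rng.standard_normal(n)

grid = np.linspace(-0.9, 0.9, 200)
for name, k in kernels.items():
    mse = np.mean((nw_estimate(grid, x, y, k, h) - z_true(grid)) ** 2)
    print(f"{name:13s} MSE = {mse:.5f}")
```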

2.
For a Gaussian stationary process with mean μ and autocovariance function γ(·), we consider improving the usual sample autocovariances with respect to the mean squared error (MSE) loss. For the cases μ=0 and μ≠0, we propose empirical Bayes type estimators, respectively. Their MSE improvements upon the usual sample autocovariances are then evaluated in terms of the spectral density of the process. Concrete examples are provided. We observe that if the process is near a unit root process, the improvement becomes quite large. Thus, consideration of estimators of this type seems important in many fields, e.g., econometrics.

3.
Multiple comparison methods are widely implemented in statistical packages and heavily used. To obtain the critical value of a multiple comparison method for a given confidence level, a double integral equation must be solved. Current computer implementations evaluate one double integral for each candidate critical value using Gaussian quadrature. Consequently, iterative refinement of the critical value can slow the response time enough to hamper interactive data analysis. However, for balanced designs, to obtain the critical value for multiple comparisons with the best, subset selection, and one-sided multiple comparison with a control, if one regards the inner integral as a function of the outer integration variable, then this function can be obtained by discrete convolution using the Fast Fourier Transform (FFT). Exploiting the fact that this function need not be re-evaluated during iterative refinement of the critical value, it is shown that the FFT method obtains critical values at least four times as accurate and two to five times as fast as the Gaussian quadrature method.

4.
The penalized quasi-likelihood (PQL) approach is the most common estimation procedure for the generalized linear mixed model (GLMM). However, it has been noted in the previous literature that PQL tends to underestimate variance components as well as regression coefficients. In this article, we numerically show that the biases of variance component estimates by PQL are systematically related to the biases of regression coefficient estimates by PQL, and also show that the biases of variance component estimates by PQL increase as random effects become more heterogeneous.

5.
The presence of multicollinearity among the explanatory variables has undesirable effects on the maximum likelihood estimator (MLE). The ridge estimator (RE) is a widely used estimator for overcoming this issue; it enjoys the advantage that its mean squared error (MSE) is less than that of the MLE. The inverse Gaussian regression (IGR) model is a well-known model in applications where the response variable is positively skewed. The purpose of this paper is to derive the RE for the IGR model under multicollinearity. In addition, the performance of this estimator is investigated under numerous methods for estimating the ridge parameter. Monte Carlo simulation results indicate that the suggested estimator performs better than the MLE in terms of MSE. Furthermore, a real chemometrics dataset is analysed, and the results demonstrate the excellent performance of the suggested estimator when multicollinearity is present in the IGR model.

6.
Although generalized linear mixed models are recognized to be of major practical importance, it is also known that they can be computationally demanding. The problem is the evaluation of the integral in calculating the marginalized likelihood. The straightforward method is the Gauss–Hermite technique, based on Gaussian quadrature points. Another approach is provided by the class of penalized quasi-likelihood methods. It is commonly believed that the Gauss–Hermite method works relatively well in simple situations but fails in more complicated structures. However, we present here a strikingly simple example of a logistic random-intercepts model in the context of a longitudinal clinical trial where the method gives valid results only for a high number of quadrature points (Q). As a consequence, this result warns the practitioner to routinely examine the dependence of the results on Q. The adaptive Gaussian quadrature, as implemented in the new SAS procedure NLMIXED, offered the solution to our problem. However, even the adaptive version of Gaussian quadrature needs careful handling to ensure convergence.
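To make the Gauss–Hermite approximation concrete, here is a minimal sketch of the per-cluster marginal likelihood of a logistic random-intercepts model evaluated with numpy's hermgauss nodes, printed for several numbers of quadrature points Q. The simulated cluster, parameter values, and function name are illustrative assumptions, not the trial analysed in the paper.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import expit

def cluster_marginal_loglik(y, x, beta, sigma, Q):
    """Gauss-Hermite approximation of one cluster's marginal log-likelihood
    for a logistic random-intercepts model:
        logit P(y_ij = 1 | b_i) = x_ij' beta + b_i,   b_i ~ N(0, sigma^2).
    """
    nodes, weights = hermgauss(Q)            # nodes/weights for int e^{-t^2} f(t) dt
    b = np.sqrt(2.0) * sigma * nodes         # change of variables to N(0, sigma^2)
    eta = x @ beta                           # fixed-effect linear predictor, shape (n_i,)
    p = expit(eta[:, None] + b[None, :])     # P(y_ij = 1 | b) at each node, (n_i, Q)
    lik_given_b = np.prod(np.where(y[:, None] == 1, p, 1 - p), axis=0)
    return np.log(np.sum(weights * lik_given_b) / np.sqrt(np.pi))

# Illustrate the dependence on Q for a single simulated cluster (assumed values)
rng = np.random.default_rng(1)
x = np.column_stack([np.ones(8), rng.standard_normal(8)])
beta, sigma = np.array([-1.0, 0.5]), 2.0
b_true = sigma * rng.standard_normal()
y = (rng.random(8) < expit(x @ beta + b_true)).astype(int)

for Q in (3, 5, 10, 20, 50):
    print(Q, cluster_marginal_loglik(y, x, beta, sigma, Q))
```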

7.
In the presence of multicollinearity, the rk class estimator is proposed as an alternative to the ordinary least squares (OLS) estimator; it is a general estimator that includes the ordinary ridge regression (ORR), principal components regression (PCR), and OLS estimators as special cases. Comparison of competing estimators of a parameter in the sense of the mean square error (MSE) criterion is of central interest. An alternative to the MSE criterion is Pitman's (1937) closeness (PC) criterion. In this paper, we compare the rk class estimator to the OLS estimator in terms of the PC criterion, thereby obtaining as special cases the comparison of the ORR estimator to the OLS estimator under the PC criterion carried out by Mason et al. (1990) and the comparison of the PCR estimator to the OLS estimator by means of the PC criterion carried out by Lin and Wei (2002).

8.
In this paper, we discuss the selection of random effects within the framework of generalized linear mixed models (GLMMs). Based on a reparametrization of the covariance matrix of random effects in terms of the modified Cholesky decomposition, we propose to add a shrinkage penalty term to the penalized quasi-likelihood (PQL) function of the variance components for selecting effective random effects. The shrinkage penalty term is taken as a function of the variance of the random effects, motivated by the fact that if the variance is zero then the corresponding variable is no longer random (with probability one). The proposed method takes advantage of the convenient computation of the PQL estimation and the appealing properties of certain shrinkage penalty functions such as LASSO and SCAD. We propose a backfitting algorithm to estimate the fixed effects and variance components in GLMMs, which also selects effective random effects simultaneously. Simulation studies show that the proposed approach performs quite well in selecting effective random effects in GLMMs. A real data analysis using the proposed approach is also presented.

9.
Summary. The collection of data through surveys is a costly and time-consuming process, particularly when complex economic data are involved. The paper presents an efficient approach, based on Gaussian quadrature, to survey sampling when some information is available about the target population. Using household data from Mozambique, we demonstrate that Gaussian quadrature subsamples, based on relatively easy to observe household characteristics such as size and educational attainment of members, generate better estimates of the moments of household expenditure than random samples of equal size.

10.
This paper introduces a new approach, based on dependent univariate GLMs, for fitting multivariate mixture models. This approach is a multivariate generalization of the method for univariate mixtures presented by Hinde (1982). Its accuracy and efficiency are compared with direct maximization of the log-likelihood. Using a simulation study, we also compare the efficiency of Monte Carlo and Gaussian quadrature methods for approximating the mixture distribution. The new approach with Gaussian quadrature outperforms the alternative methods considered. The work is motivated by the multivariate mixture models which have been proposed for modelling changes of employment states at an individual level. Similar formulations are of interest for modelling movement between other social and economic states, and multivariate mixture models also occur in biostatistics and epidemiology.

11.
In this article, the least squares (LS) estimates of the parameters of periodic autoregressive (PAR) models are investigated for various distributions of error terms via Monte Carlo simulation. Besides the Gaussian distribution, this study covers the exponential, gamma, Student-t, and Cauchy distributions. The estimates are compared across distributions via the bias and MSE criteria. The effects of other factors are also examined, namely the non-constancy of model orders, the non-constancy of the variances of the seasonal white noise, the period length, and the length of the time series. The simulation results indicate that this method is in general robust for the estimation of AR parameters with respect to the distribution of error terms and the other factors. However, the estimates of those parameters were, in some cases, noticeably poor for the Cauchy distribution. It is also noticed that the variances of the estimates of the white noise variances are highly affected by the degree of skewness of the distribution of error terms.
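As an illustration of season-wise least squares for a PAR model, the sketch below simulates a PAR(1) process with period 4 and estimates the seasonal AR coefficients by a separate regression for each season. The period, coefficients, and Gaussian errors are assumed for illustration; the study itself varies the error distribution and other factors.

```python
import numpy as np

rng = np.random.default_rng(2)
s = 4                                   # period length (assumed)
phi = np.array([0.9, -0.4, 0.5, 0.2])   # hypothetical seasonal AR(1) coefficients
n = 400 * s

# Simulate a PAR(1) process: X_t = phi[t mod s] * X_{t-1} + e_t
x = np.zeros(n)
e = rng.standard_normal(n)              # swap in t, gamma, Cauchy, ... to mimic the study
for t in range(1, n):
    x[t] = phi[t % s] * x[t - 1] + e[t]

# Season-wise least squares: regress X_t on X_{t-1} separately for each season
phi_hat = np.empty(s)
for season in range(s):
    idx = np.arange(1, n)[np.arange(1, n) % s == season]
    phi_hat[season] = np.sum(x[idx] * x[idx - 1]) / np.sum(x[idx - 1] ** 2)

print("true:", phi)
print("LS  :", phi_hat.round(3))
```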

12.
Problems with censored data arise quite frequently in reliability applications, where estimation of the reliability function is usually of concern. The reliability function estimators proposed by Kaplan and Meier (1958) and Breslow (1972) are generally used when dealing with censored data. These estimators have the known properties of being asymptotically unbiased, uniformly strongly consistent, and weakly convergent to the same Gaussian process when properly normalized. In this paper, we study the properties of the Kaplan-Meier estimator smoothed with a suitable kernel function. The smooth estimator is compared with the Kaplan-Meier and Breslow estimators for large sample sizes, giving an exact expression for an appropriately normalized difference of the mean square errors (MSE) of the two estimators. This quantifies the deficiency of the Kaplan-Meier estimator in comparison to the smoothed version. We also obtain a non-asymptotic bound on an expected L1-type error under weak conditions. Some simulations are carried out to examine the performance of the suggested method.
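A minimal sketch of a kernel-smoothed Kaplan-Meier estimator, under the assumption that smoothing is done by spreading each Kaplan-Meier jump with a Gaussian kernel CDF; the exact kernel and smoothing form used in the paper may differ. The simulated censored data, bandwidth, and function names are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate; returns distinct event times and S(t)."""
    order = np.argsort(time)
    time, event = time[order], event[order]
    t_uniq = np.unique(time[event == 1])
    surv = np.empty_like(t_uniq, dtype=float)
    s = 1.0
    for j, t in enumerate(t_uniq):
        n_risk = np.sum(time >= t)
        d = np.sum((time == t) & (event == 1))
        s *= 1.0 - d / n_risk
        surv[j] = s
    return t_uniq, surv

def smoothed_km(t_grid, t_uniq, surv, h):
    """Smooth the KM step function by spreading each (negative) jump with a Gaussian CDF."""
    jumps = np.diff(np.concatenate(([1.0], surv)))    # jump sizes at each event time
    return 1.0 + np.sum(jumps[None, :] * norm.cdf((t_grid[:, None] - t_uniq[None, :]) / h),
                        axis=1)

# Simulated right-censored exponential data (hypothetical example)
rng = np.random.default_rng(3)
n = 200
t_true = rng.exponential(1.0, n)
c = rng.exponential(1.5, n)
time = np.minimum(t_true, c)
event = (t_true <= c).astype(int)

t_uniq, surv = kaplan_meier(time, event)
grid = np.linspace(0, 3, 50)
print(np.round(smoothed_km(grid, t_uniq, surv, h=0.2)[:10], 3))
```

As the bandwidth h tends to zero the smoothed curve recovers the Kaplan-Meier step function, which is the sense in which it is a smoothed version of the same estimator.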

13.
Two nonparametric estimators of the survival distribution are discussed. The estimators were proposed by Kaplan and Meier (1958) and Breslow (1972) and are applicable when dealing with censored data. It is known that they are asymptotically unbiased and uniformly strongly consistent, and that, when properly normalized, they converge weakly to the same Gaussian process. In this paper, the properties of the estimators are carefully inspected in small or moderate samples. The Breslow estimator, a shrinkage version of the Kaplan-Meier, nearly always has the smaller mean square error (MSE) whenever the true survival probability is at least 0.20, but has considerably larger MSE than the Kaplan-Meier estimator when the survival probability is near zero.

14.
In this study, we demonstrate how generalized propensity score estimators (Imbens' weighted estimator, the propensity score weighted estimator, and the generalized doubly robust estimator) can be used to calculate the adjusted marginal probabilities for estimating the three common binomial parameters: the risk difference (RD), the relative risk (RR), and the odds ratio (OR). We further conduct a simulation study to compare the estimated RD, RR, and OR using the adjusted and the unadjusted marginal probabilities in terms of the bias and mean-squared error (MSE). Although there is no clear winner in terms of MSE for estimating RD, RR, and OR, the simulation results surprisingly show that the unadjusted marginal probabilities produce the smallest bias compared with the adjusted marginal probabilities in most of the estimates. Hence, we recommend using the unadjusted marginal probabilities to estimate RD, RR, and OR in practice.
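As one concrete version of "adjusted marginal probabilities", the sketch below uses inverse-propensity weighting to estimate the marginal outcome probability in each treatment arm and then forms RD, RR, and OR. The paper's three estimators (Imbens' weighted, propensity score weighted, generalized doubly robust) differ in detail, so treat this as an assumed simplification; the data and function name are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

def ipw_marginal_probs(y, treat, X):
    """Propensity-score-weighted marginal outcome probabilities per arm."""
    ps_model = sm.Logit(treat, sm.add_constant(X)).fit(disp=0)
    e = ps_model.predict(sm.add_constant(X))                 # estimated propensity scores
    p1 = np.sum(treat * y / e) / np.sum(treat / e)            # adjusted P(Y=1) under treatment
    p0 = np.sum((1 - treat) * y / (1 - e)) / np.sum((1 - treat) / (1 - e))
    return p1, p0

# Simulated data with confounding (hypothetical)
rng = np.random.default_rng(4)
n = 2000
X = rng.standard_normal((n, 2))
treat = (rng.random(n) < 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))).astype(int)
p_y = 1 / (1 + np.exp(-(-0.5 + 0.8 * treat + 0.6 * X[:, 0])))
y = (rng.random(n) < p_y).astype(int)

p1, p0 = ipw_marginal_probs(y, treat, X)
rd = p1 - p0
rr = p1 / p0
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))
print(f"RD = {rd:.3f}, RR = {rr:.3f}, OR = {odds_ratio:.3f}")
```

The unadjusted versions discussed in the abstract would simply replace p1 and p0 by the raw arm-specific sample proportions.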

15.
Estimation in logistic-normal models for correlated and overdispersed binomial data is complicated by the numerical evaluation of often intractable likelihood functions. Penalized quasilikelihood (PQL) estimators of fixed effects and variance components are known to be seriously biased for binary data. A simple correction procedure has been proposed to improve the performance of the PQL estimators. The proposed method is illustrated by analyzing infectious disease data. Its performance is compared, by means of simulations, with that of the Bayes approach using the Gibbs sampler.

16.
We develop a pre-test type estimator of a deterministic parameter vector β in a linear Gaussian regression model. In contrast to conventional pre-test strategies, which do not dominate the least-squares (LS) method in terms of mean-squared error (MSE), our technique is shown to dominate LS when the effective dimension is greater than or equal to 4. Our estimator is based on a simple and intuitive approach in which we first determine the linear minimum MSE (MMSE) estimate. Since the unknown vector β is deterministic, the MSE, and consequently the MMSE solution, will in general depend on β and therefore cannot be implemented. Instead, we propose applying the linear MMSE strategy with the LS estimate substituted for the true value of β to obtain a new estimate. We then use the current estimate in conjunction with the linear MMSE solution to generate another estimate and continue iterating until convergence. As we show, the limit is a pre-test type method which is zero when the norm of the data is small, and is otherwise a non-linear shrinkage of LS.
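A toy sketch of the iteration described above, under the simplifying assumption that the linear MMSE step reduces to a scalar shrinkage of the LS estimate (the paper works with the full linear MMSE structure, so this is only an analogy). Plugging the current estimate into the shrinkage factor and iterating either collapses to zero or converges to a non-trivial shrinkage of LS, mirroring the pre-test behaviour described. All data and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, sigma = 50, 6, 1.0
X = rng.standard_normal((n, p))
beta = 0.1 * rng.standard_normal(p)                  # "small" true beta (assumed)
y = X @ beta + sigma * rng.standard_normal(n)

beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
tau = sigma**2 * np.trace(np.linalg.inv(X.T @ X))    # MSE of LS when sigma is known

# Iterate: plug the current estimate into the scalar MMSE shrinkage of beta_LS
b = beta_ls.copy()
for _ in range(200):
    c = np.dot(b, b) / (np.dot(b, b) + tau)          # MMSE shrinkage if the true beta were b
    b_new = c * beta_ls
    if np.allclose(b_new, b):
        break
    b = b_new

shrink = np.linalg.norm(b) / np.linalg.norm(beta_ls)
print(f"limiting shrinkage factor: {shrink:.3f}  (near 0 means the estimate collapsed to zero)")
print(f"||beta_LS||^2 = {beta_ls @ beta_ls:.3f},  4*tau = {4 * tau:.3f}")
```

In this scalar version the iteration collapses to zero whenever the squared norm of the LS estimate falls below roughly four times tau, and otherwise settles at a fixed shrinkage of LS, which is the qualitative behaviour the abstract attributes to the limit.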

17.
Abstract. Continuous proportional outcomes are collected from many practical studies, where responses are confined within the unit interval (0,1). Utilizing Barndorff‐Nielsen and Jørgensen's simplex distribution, we propose a new type of generalized linear mixed‐effects model for longitudinal proportional data, where the expected value of the proportion is directly modelled through a logit function of fixed and random effects. We establish statistical inference along the lines of Breslow and Clayton's penalized quasi‐likelihood (PQL) and restricted maximum likelihood (REML) in the proposed model. We derive the PQL/REML using the high‐order multivariate Laplace approximation, which gives satisfactory estimation of the model parameters. The proposed model and inference are illustrated by simulation studies and a data example. The simulation studies conclude that the fourth order approximate PQL/REML performs satisfactorily. The data example shows that Aitchison's technique of the normal linear mixed model for logit‐transformed proportional outcomes is not robust against outliers.

18.
We consider two estimation schemes based on penalized quasilikelihood and quasi-pseudo-likelihood in Poisson mixed models. The asymptotic bias in regression coefficients and variance components estimated by penalized quasilikelihood (PQL) is studied for small values of the variance components. We show the PQL estimators of both regression coefficients and variance components in Poisson mixed models have a smaller order of bias compared to those for binomial data. Unbiased estimating equations based on quasi-pseudo-likelihood are proposed and are shown to yield consistent estimators under some regularity conditions. The finite sample performance of these two methods is compared through a simulation study.

19.
Recently, in Dutt (1973, 1975), integral representations over (0, A) were obtained for upper and lower multivariate normal and t probabilities. It was pointed out that these integral representations, when evaluated by Gauss-Hermite quadrature, yield rapid and accurate numerical results.

Here, integral representations, based on an integral formula due to Gurland (1948), are given for arbitrary multivariate probabilities. Application of this general representation to computing multivariate χ² probabilities is discussed, and numerical results using Gaussian quadrature are given for the bivariate and equicorrelated trivariate cases. Applications to the multivariate densities studied by Miller (1965) are also included.
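To illustrate the kind of one-dimensional quadrature reduction such representations enable (this is the classical one-factor reduction for the equicorrelated normal case, not Gurland's formula itself), the sketch below evaluates an equicorrelated multivariate normal probability by Gauss-Hermite quadrature and checks it by Monte Carlo. The limits and correlation are hypothetical.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.stats import norm

def equicorrelated_normal_cdf(a, rho, Q=40):
    """P(X_1 <= a_1, ..., X_k <= a_k) for standard normal X with common
    correlation rho >= 0, via the one-factor reduction
        X_i = sqrt(rho) Z + sqrt(1 - rho) U_i
    and Gauss-Hermite quadrature over the common factor Z."""
    a = np.asarray(a, dtype=float)
    t, w = hermgauss(Q)                                   # nodes/weights for e^{-t^2}
    z = np.sqrt(2.0) * t                                   # N(0,1) quadrature nodes
    inner = norm.cdf((a[:, None] - np.sqrt(rho) * z[None, :]) / np.sqrt(1 - rho))
    return np.sum(w * np.prod(inner, axis=0)) / np.sqrt(np.pi)

a, rho = [0.5, 1.0, -0.2], 0.5
print("quadrature :", equicorrelated_normal_cdf(a, rho))

# Quick Monte Carlo check of the same probability
rng = np.random.default_rng(6)
z = rng.standard_normal((200_000, 1))
u = rng.standard_normal((200_000, 3))
x = np.sqrt(rho) * z + np.sqrt(1 - rho) * u
print("Monte Carlo:", np.mean(np.all(x <= np.array(a), axis=1)))
```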

20.
Missing data methods, maximum likelihood estimation (MLE) and multiple imputation (MI), for longitudinal questionnaire data were investigated via simulation. Predictive mean matching (PMM) was applied at both the item and scale levels, logistic regression at the item level, and multivariate normal imputation at the scale level. We also investigated a hybrid approach, a combination of MLE and MI in which scales from the imputed data are eliminated if all underlying items were originally missing. Bias and mean square error (MSE) of the parameter estimates were examined. MLE occasionally provided the best results in terms of bias, but hardly ever in terms of MSE. All imputation methods at the scale level, as well as logistic regression at the item level, hardly ever showed the best performance. The hybrid approach performs similarly to or better than its original MI counterpart. The PMM-hybrid approach at the item level demonstrated the best MSE for most settings and in some cases also the smallest bias.
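For readers unfamiliar with PMM, here is a minimal single-imputation sketch of predictive mean matching for one continuous item: fit a regression on complete cases, predict for everyone, and fill each missing value with the observed value of a randomly chosen donor among the k closest predictions. The donor count, simulated data, and function name are illustrative assumptions; the study uses multiple imputation and the item/scale-level variants described above.

```python
import numpy as np

def pmm_impute(y, X, k=5, rng=None):
    """Single imputation of missing y by predictive mean matching."""
    if rng is None:
        rng = np.random.default_rng()
    obs = ~np.isnan(y)
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd[obs], y[obs], rcond=None)   # fit on complete cases
    pred = Xd @ beta                                          # predictions for all cases
    y_imp = y.copy()
    obs_idx = np.where(obs)[0]
    for i in np.where(~obs)[0]:
        # k observed cases with the closest predicted means serve as the donor pool
        donors = obs_idx[np.argsort(np.abs(pred[obs_idx] - pred[i]))[:k]]
        y_imp[i] = y[rng.choice(donors)]
    return y_imp

# Hypothetical example: one questionnaire item missing at random
rng = np.random.default_rng(7)
n = 300
X = rng.standard_normal((n, 2))
y = 1.0 + X @ np.array([0.8, -0.5]) + rng.standard_normal(n)
y_mis = y.copy()
y_mis[rng.random(n) < 0.3] = np.nan

y_completed = pmm_impute(y_mis, X, k=5, rng=rng)
print("true mean:", round(y.mean(), 3), " PMM-completed mean:", round(y_completed.mean(), 3))
```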
