Similar documents
Found 20 similar documents (search time: 109 ms)
1.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.
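As a hedged illustration of the residual-bootstrap idea in the entry above, the sketch below resamples the empirical residuals of a simple OLS fit (invented data, not the paper's three-stage least squares system) and takes the spread of the refitted slopes as the bootstrap standard error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: y = 1 + 2x + noise (illustrative, not the paper's model).
n = 100
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat

B = 500
slopes = np.empty(B)
for b in range(B):
    # Resample residuals with replacement (a nonparametric estimate of the
    # error distribution) and rebuild a pseudo-response.
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    slopes[b] = np.linalg.lstsq(X, y_star, rcond=None)[0][1]

se_boot = slopes.std(ddof=1)
print(f"slope = {beta_hat[1]:.3f}, bootstrap SE = {se_boot:.3f}")
```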

2.
We define a parametric proportional odds frailty model to describe lifetime data incorporating heterogeneity between individuals. An unobserved individual random effect, called frailty, acts multiplicatively on the odds of failure by time t. We investigate fitting by maximum likelihood and by least squares. For the latter, the parametric survivor function is fitted to the nonparametric Kaplan–Meier estimate at the observed failure times. Bootstrap standard errors and confidence intervals are obtained for the least squares estimates. The models are applied successfully to simulated data and to two real data sets. Least squares estimates appear to have smaller bias than maximum likelihood.
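A minimal sketch of the least squares idea in entry 2, under simplifying assumptions of my own (an exponential survivor function rather than the frailty model, and no censoring, so the Kaplan–Meier estimate reduces to the empirical survivor function):

```python
import numpy as np

rng = np.random.default_rng(1)

# Uncensored exponential lifetimes (true rate 0.5).
n = 200
t = np.sort(rng.exponential(scale=2.0, size=n))
km = 1.0 - np.arange(1, n + 1) / n   # KM estimate just after each failure time

# Least squares: pick the rate minimising the squared distance between the
# parametric survivor exp(-lam * t) and the KM estimate at the failure times.
lams = np.linspace(0.05, 2.0, 400)
sse = [np.sum((np.exp(-lam * t) - km) ** 2) for lam in lams]
lam_hat = lams[int(np.argmin(sse))]
print(f"least squares rate estimate: {lam_hat:.3f}")
```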

3.
The generalised least squares, maximum likelihood, Bain-Antle 1 and 2, and two mixed methods of estimating the parameters of the two-parameter Weibull distribution are compared. The comparison is made using (a) the observed relative efficiency of parameter estimates and (b) the mean squared relative error in estimated quantiles, to summarize the results of 1000 simulated samples of sizes 10 and 25. The results are that: generalised least squares is the best method of estimating the shape parameter β; the best method of estimating the scale parameter α depends on the size of β; for quantile estimation, maximum likelihood is best; and Bain-Antle 2 is uniformly the worst of the methods.
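The least squares method compared in entry 3 is commonly implemented by linearising the Weibull CDF on a log-log scale; the sketch below (simulated data, with median-rank plotting positions as an assumption of mine) estimates both parameters that way:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true, beta_true = 3.0, 1.5          # scale and shape
n = 200
x = np.sort(alpha_true * rng.weibull(beta_true, n))

# Median-rank plotting positions for the empirical CDF.
F = (np.arange(1, n + 1) - 0.5) / n

# Weibull CDF F(x) = 1 - exp(-(x/alpha)^beta) linearises to
# ln(-ln(1 - F)) = beta * ln(x) - beta * ln(alpha).
yy = np.log(-np.log(1.0 - F))
A = np.column_stack([np.log(x), np.ones(n)])
(slope, intercept), *_ = np.linalg.lstsq(A, yy, rcond=None)
beta_hat, alpha_hat = slope, np.exp(-intercept / slope)
print(f"shape = {beta_hat:.2f}, scale = {alpha_hat:.2f}")
```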

4.
Four procedures are suggested for estimating the parameter ‘a’ in the Pauling equation:

exp(-X/a) + exp(-Y/a) = 1.

The procedures are: using the mean of the individual solutions, least squares with Y as the subject of the equation, least squares with X as the subject of the equation, and maximum likelihood under a statistical model. In order to compare these estimates, we use Efron's (1979) bootstrap technique, since distributional results are not available. This example also illustrates the role of the bootstrap in statistical inference.
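Entry 4's first procedure (the mean of the individual solutions) can be sketched directly: for each observed pair (X, Y), solve exp(-X/a) + exp(-Y/a) = 1 for a by bisection, which works because the left-hand side is monotone increasing in a for X, Y > 0, then average the roots. The data below are simulated purely for illustration:

```python
import numpy as np

def solve_a(X, Y, lo=1e-6, hi=100.0, iters=60):
    """Bisection root of f(a) = exp(-X/a) + exp(-Y/a) - 1,
    which is monotone increasing in a for X, Y > 0."""
    f = lambda a: np.exp(-X / a) + np.exp(-Y / a) - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(3)
a_true = 1.0
X = rng.uniform(0.3, 3.0, 50)
# Y chosen to satisfy the equation exactly, plus small measurement noise.
Y = -a_true * np.log(1.0 - np.exp(-X / a_true)) + rng.normal(0, 0.01, 50)

a_hat = np.mean([solve_a(xi, yi) for xi, yi in zip(X, Y)])
print(f"mean of individual solutions: {a_hat:.3f}")
```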


5.
We conducted confirmatory factor analysis (CFA) of responses (N = 803) to a self-reported measure of optimism, using full-information estimation via adaptive quadrature (AQ), an alternative estimation method for ordinal data. We evaluated the AQ results in terms of the number of iterations required to achieve convergence, model fit, parameter estimates, standard errors (SEs), and statistical significance, across four link functions (logit, probit, log-log, complementary log-log) using 3-10 and 20 quadrature points. We compared the AQ results with those obtained using maximum likelihood, robust maximum likelihood, and robust diagonally weighted least squares estimation. Compared with the other two link functions, logit and probit produced fit statistics, parameter estimates, SEs, and levels of significance that varied less across numbers of quadrature points; they also fitted the data better and provided larger completely standardised loadings than did maximum likelihood and diagonally weighted least squares. Our findings demonstrate the viability of using full-information AQ to estimate CFA models with real-world ordinal data.

6.
The estimation of the parameters of the log normal distribution from complete and censored samples has been considered in the literature. In this article, the problem of estimating the parameters of a log normal mixture model is considered. Because the likelihood equations do not yield closed-form expressions, the Expectation-Maximization algorithm is used to obtain maximum likelihood estimators of the parameters. The standard errors of the estimates are obtained, and confidence intervals based on large-sample theory are derived. The methodology developed here is illustrated through simulation studies.
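A sketch of the EM iteration for a two-component lognormal mixture: on the log scale the data are a normal mixture, so the standard normal-mixture EM updates apply. All numbers here are simulated assumptions of mine, not from the article:

```python
import numpy as np

rng = np.random.default_rng(4)
# Log of lognormal mixture data = normal mixture (components at 0 and 3).
z = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(3.0, 0.5, 200)])

def phi(z, mu, s):
    return np.exp(-0.5 * ((z - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Crude but serviceable starting values.
pi, mu1, mu2, s1, s2 = 0.5, z.min(), z.max(), 1.0, 1.0

for _ in range(100):
    # E-step: posterior probability that each point came from component 1.
    w = pi * phi(z, mu1, s1)
    w = w / (w + (1 - pi) * phi(z, mu2, s2))
    # M-step: weighted mixing proportion, means and standard deviations.
    pi = w.mean()
    mu1, mu2 = np.average(z, weights=w), np.average(z, weights=1 - w)
    s1 = np.sqrt(np.average((z - mu1) ** 2, weights=w))
    s2 = np.sqrt(np.average((z - mu2) ** 2, weights=1 - w))

print(f"pi = {pi:.2f}, mu = ({mu1:.2f}, {mu2:.2f}), sd = ({s1:.2f}, {s2:.2f})")
```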

7.
Combining estimating functions for volatility
Accurate estimates of volatility are needed in risk management. Generalized autoregressive conditional heteroscedastic (GARCH) models and random coefficient autoregressive (RCA) models have been used for volatility modelling. Following Heyde [1997. Quasi-likelihood and its Applications. Springer, New York], volatility estimates are obtained by combining two different estimating functions. It turns out that the combined estimating function for the parameter of autoregressive processes with GARCH errors and of RCA models contains maximum information. As an application, the combination of the least squares (LS) and least absolute deviation (LAD) estimating functions is discussed for GARCH model error identification.

8.
In split-plot experiments, estimation of unknown parameters by generalized least squares (GLS), as opposed to ordinary least squares (OLS), is required, owing to the existence of whole- and subplot errors. However, estimating the error variances is often necessary for GLS. Restricted maximum likelihood (REML) is an established method for estimating the error variances, and its benefits have been highlighted in many previous studies. This article proposes a new two-step residual-based approach for estimating error variances. Results of numerical simulations indicate that the proposed method performs sufficiently well to be considered as a suitable alternative to REML.

9.
The likelihood of a generalized linear mixed model (GLMM) often involves high-dimensional integrals which, in general, cannot be computed explicitly. When direct computation is not available, the method of simulated moments (MSM) is a fairly simple way to estimate the parameters of interest. In this research, we compare parametric bootstrap (PB) and nonparametric bootstrap (NPB) methods for estimating the standard errors of MSM estimators for GLMMs. Simulation results show that when the group size is large, PB and NPB perform similarly; when the group size is medium, NPB performs better than PB in estimating the standard errors of the mean.

10.
In forestry, the problem of estimating areas is central. This paper addresses area estimation through the fitting of a polygon to observed coordinate data. Coordinates of corners, and of points along the sides, of a simple closed polygon are measured with independent random errors. The paper focuses on procedures that adjust the coordinates for estimation of the polygon and its area. Several new techniques, which exploit different amounts of prior information, are described and compared; they use restricted least squares, maximum likelihood and the expectation-maximization (EM) algorithm. A simulation study shows that the root mean square errors of the estimates decrease when the coordinates are adjusted before estimation. Minor further improvement is achieved by using prior information about the order and distribution of the points along the sides of the polygon. The paper has its origin in forestry, but there are other applications as well.
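Entry 10's setting can be illustrated with the shoelace formula. The sketch below is only a toy version of the idea: adjusting (here, simply averaging) repeated noisy coordinate measurements before computing the area improves on a single noisy measurement; the paper's restricted least squares and EM adjustments are more sophisticated. The square and noise level are invented:

```python
import numpy as np

def shoelace_area(xy):
    """Area of a simple closed polygon from its ordered corner coordinates
    (an n x 2 array)."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(5)
square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)  # area 100

# 25 noisy measurements of each corner; average the coordinates
# (a crude "adjustment") before computing the area.
noisy = square + rng.normal(0.0, 0.5, size=(25, 4, 2))
area_adjusted = shoelace_area(noisy.mean(axis=0))
print(f"adjusted-area estimate: {area_adjusted:.2f}")
```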

11.
A simulation study of the binomial-logit model with correlated random effects is carried out based on the generalized linear mixed model (GLMM) methodology. Simulated data with various numbers of regression parameters and different values of the variance component are considered. The performance of approximate maximum likelihood (ML) and residual maximum likelihood (REML) estimators is evaluated. For a range of true parameter values, we report the average biases of estimators, the standard error of the average bias and the standard error of estimates over the simulations. In general, in terms of bias, the two methods do not show significant differences in estimating regression parameters. The REML estimation method is slightly better in reducing the bias of variance component estimates.

12.
This paper presents a robust extension of the factor analysis model, obtained by assuming a multivariate normal mean-variance mixture of Birnbaum-Saunders distributions for the unobservable factors and errors. A computationally feasible EM-based algorithm is developed to find maximum likelihood estimates of the parameters. The asymptotic standard errors of the parameter estimates are derived under an information-based paradigm. The numerical merits of the proposed methodology are illustrated using both simulated and real datasets.

13.
In this paper, we propose a hidden Markov model for the analysis of the time series of bivariate circular observations, by assuming that the data are sampled from bivariate circular densities, whose parameters are driven by the evolution of a latent Markov chain. The model segments the data by accounting for redundancies due to correlations along time and across variables. A computationally feasible expectation maximization (EM) algorithm is provided for the maximum likelihood estimation of the model from incomplete data, by treating the missing values and the states of the latent chain as two different sources of incomplete information. Importance-sampling methods facilitate the computation of bootstrap standard errors of the estimates. The methodology is illustrated on a bivariate time series of wind and wave directions and compared with popular segmentation models for bivariate circular data, which ignore correlations across variables and/or along time.

14.
The purposes of this paper are to collect, from a number of articles, variance component estimation procedures that can be used for completely random balanced incomplete block designs; to develop an iterated least squares (ITLS) computing algorithm for calculating maximum likelihood estimates; and to compare these procedures by means of simulated experiments. Based on the simulated experiments, the estimated mean square errors of the ITLS estimates are generally less than those of the previously proposed analysis-of-variance and symmetric-sums estimators.

15.
A new method, based on estimating equations, is proposed for estimating a set of odds ratios under an order restriction. The method is applied to the conditional maximum likelihood estimators and to the Mantel-Haenszel estimators. The estimators derived from the conditional likelihood estimating equations are shown to maximize the conditional likelihoods. The restricted estimators are also shown to converge almost surely to the respective odds ratios as the sample sizes grow. The restricted estimators are compared with the unrestricted maximum likelihood estimators by Monte Carlo simulation. The simulation studies show that the restricted estimates reduce the mean squared errors markedly, while the Mantel-Haenszel type estimates are competitive with, though slightly worse than, the conditional maximum likelihood estimates.

16.
In the context of linear regression with dependent and nonstationary errors, the classical moving-block bootstrap (MBB) fails to capture the nonstationarity of the errors. A new bootstrap procedure called the blocking external bootstrap (BEB) is proposed to overcome the problem. The consistency of the BEB in estimating the variance of the least-squares estimator is studied in the case of α-mixing and nonstationary sequence of errors. It is shown that the BEB only achieves partial correction if the block size is fixed. Complete consistency is achieved by the BEB when the block size is allowed to go to infinity. We also study the first-order consistency of the least-squares estimator based on the BEB. A simulation study is carried out to assess the performance of the BEB versus the MBB in estimating the variance of the least-squares estimator. Finally, some open problems are discussed.
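The classical moving-block bootstrap that entry 16 builds on can be sketched as follows: resample whole blocks of a dependent error series so the bootstrap variance of the mean reflects serial correlation that an iid variance formula misses. The AR(1) errors and block length are assumptions of this toy example, and this is the MBB, not the proposed BEB:

```python
import numpy as np

rng = np.random.default_rng(6)

# Positively autocorrelated AR(1) errors: the iid formula var(x)/n
# understates the variance of the sample mean.
n, ar = 400, 0.6
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = ar * e[t - 1] + rng.normal()

def mbb_var_of_mean(x, block_len, B, rng):
    """Moving-block bootstrap estimate of Var(sample mean)."""
    n = len(x)
    k = n // block_len                       # blocks per bootstrap series
    means = np.empty(B)
    for b in range(B):
        starts = rng.integers(0, n - block_len + 1, size=k)
        sample = np.concatenate([x[s:s + block_len] for s in starts])
        means[b] = sample.mean()
    return means.var(ddof=1)

v_mbb = mbb_var_of_mean(e, block_len=20, B=500, rng=rng)
v_iid = e.var(ddof=1) / n
print(f"MBB: {v_mbb:.4f}, iid formula: {v_iid:.4f}")
```

With positive autocorrelation the MBB estimate should exceed the iid formula by a clear margin, since whole blocks preserve the short-range dependence.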

17.
Maximum likelihood estimation of prevalence ratios using the log-binomial model is problematic when the estimates are on the boundary of the parameter space. When the model is correct, maximum likelihood is often the method of choice. The authors provide a theorem, formulas, and methodology for obtaining maximum likelihood estimators of the log-binomial model and their estimated standard errors when the solution is on the boundary of the parameter space. Examples are given to illustrate the method.

18.
In multiple regression and other settings one encounters the problem of estimating sampling distributions for contrast operations applied to i.i.d. errors. The permutation bootstrap applied to least squares residuals has been proven to consistently estimate conditional sampling distributions of contrasts, conditional upon the order statistics of the errors, even for long-tailed error distributions. How does this compare with the unconditional sampling distribution of the contrast when standardizing by the sample s.d. of the errors (or the residuals)? For errors belonging to the domain of attraction of a normal law, we present a limit theorem proving that these distributions are far closer to one another than they are to the limiting standard normal distribution. For errors attracted to α-stable laws with α ≤ 2, we construct random variables possessing these conditional and unconditional sampling distributions and develop a Poisson representation for their a.s. limit correlation ρα. We prove that ρ2 = 1, that ρα → 1 as α → 0+ or α → 2−, and that ρα < 1 a.s. for α < 2.

19.
Parameter estimates of a new distribution for the strength of brittle fibers and composite materials are considered. An algorithm for generating random numbers from the distribution is suggested. Two parameter estimation methods, one based on a simple least squares procedure and the other based on the maximum likelihood principle, are studied using Monte Carlo simulation. In most cases, the maximum likelihood estimators were found to have somewhat smaller root mean squared error and bias than the least squares estimators. However, the least squares estimates are generally good and provide useful initial values for the numerical iteration used to find the maximum likelihood estimates.

20.
In a stated preference discrete choice experiment each subject is typically presented with several choice sets, and each choice set contains a number of alternatives. The alternatives are defined in terms of their name (brand) and their attributes at specified levels. The task for the subject is to choose from each choice set the alternative with highest utility for them. The multinomial is an appropriate distribution for the responses to each choice set since each subject chooses one alternative, and the multinomial logit is a common model. If the responses to the several choice sets are independent, the likelihood function is simply the product of multinomials. The most common and generally preferred method of estimating the parameters of the model is maximum likelihood (that is, selecting as estimates those values that maximize the likelihood function). If the assumption of within-subject independence to successive choice tasks is violated (it is almost surely violated), the likelihood function is incorrect and maximum likelihood estimation is inappropriate. The most serious errors involve the estimation of the variance-covariance matrix of the model parameter estimates, and the corresponding variances of market shares and changes in market shares.

In this paper we present an alternative method of estimation of the model parameter coefficients that incorporates a first-order within-subject covariance structure. The method involves the familiar log-odds transformation and application of the multivariate delta method. Estimation of the model coefficients after the transformation is a straightforward generalized least squares regression, and the corresponding improved estimate of the variance-covariance matrix is in closed form. Estimates of market share (and change in market share) follow from a second application of the multivariate delta method. The method and comparison with maximum likelihood estimation are illustrated with several simulated and actual data examples.

Advantages of the proposed method are: 1) it incorporates the within-subject covariance structure; 2) it is completely data driven; 3) it requires no additional model assumptions; 4) assuming asymptotic normality, it provides a simple procedure for computing confidence regions on market shares and changes in market shares; and 5) it produces results that are asymptotically equivalent to those produced by maximum likelihood when the data are independent.
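The log-odds transformation and delta-method step described above can be sketched for a single three-alternative choice set (all numbers invented): for multinomial shares, the delta method gives Var(log(p1/p2)) ≈ (1/p1 + 1/p2)/n, which the snippet checks against a Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(7)
p = np.array([0.5, 0.3, 0.2])   # assumed true choice shares, 3 alternatives
n = 1000                        # subjects responding to this choice set

# One observed multinomial sample, its log-odds, and the delta-method SE.
counts = rng.multinomial(n, p)
p_hat = counts / n
log_odds = np.log(p_hat[0] / p_hat[1])
se_delta = np.sqrt((1.0 / p_hat[0] + 1.0 / p_hat[1]) / n)

# Monte Carlo check of the delta-method standard error.
sims = rng.multinomial(n, p, size=2000) / n
se_mc = np.log(sims[:, 0] / sims[:, 1]).std(ddof=1)
print(f"delta-method SE: {se_delta:.4f}, Monte Carlo SE: {se_mc:.4f}")
```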


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号