Similar Documents
 20 similar documents found
1.
The bootstrap, like the jackknife, is a technique for estimating standard errors. The idea is to use Monte Carlo simulation, based on a nonparametric estimate of the underlying error distribution. The bootstrap will be applied to an econometric model describing the demand for capital, labor, energy, and materials. The model is fitted by three-stage least squares. In sharp contrast with previous results, the coefficient estimates and the estimated standard errors perform very well. However, the model's forecasts show serious bias and large random errors, significantly understated by the conventional standard error of forecast.
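The resampling idea behind this abstract can be sketched in a few lines. This is a generic illustration of the nonparametric bootstrap for a standard error (here, of the sample mean, with made-up data), not the paper's three-stage least squares application:

```python
import random
import statistics

def bootstrap_se(data, stat=statistics.fmean, n_boot=2000, seed=0):
    """Bootstrap standard error of `stat`: resample the data with
    replacement many times and take the standard deviation of the
    replicated statistics."""
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([rng.choice(data) for _ in range(n)]) for _ in range(n_boot)]
    return statistics.stdev(reps)

# hypothetical sample; the classical SE of the mean would be s / sqrt(n)
sample = [2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 2.8, 2.4]
se_boot = bootstrap_se(sample)
```

For the mean, the bootstrap answer should land close to the classical s/sqrt(n); its value lies in simulating the sampling distribution when no closed-form standard error exists.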

2.
Amemiya's estimator is a weighted least squares estimator of the regression coefficients in a linear model with heteroscedastic errors. It is attractive because the heteroscedasticity is not parametrized and the weights (which depend on the error covariance matrix) are estimated nonparametrically. This paper derives an asymptotic expansion for Amemiya's form of the weighted least squares estimator, and uses it to discuss the effects of estimating the weights, of the number of iterations, and of the choice of the initial estimate. The paper also discusses the special case of normally distributed errors and clarifies the particular consequences of assuming normality.
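Amemiya's estimator itself is not reproduced here, but the two-stage flavour of weighted least squares with nonparametrically estimated weights can be sketched for one predictor. The moving-average variance estimate, window size, and simulated data are assumptions of this sketch, not the paper's method:

```python
import random
import statistics

def ols(x, y):
    """Ordinary least squares intercept and slope for y = a + b*x."""
    xbar, ybar = statistics.fmean(x), statistics.fmean(y)
    b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
         / sum((xi - xbar) ** 2 for xi in x))
    return ybar - b * xbar, b

def two_stage_wls(x, y, window=5):
    """Two-stage WLS: (1) fit OLS; (2) estimate each error variance
    nonparametrically by a moving average of squared residuals over
    neighbours in x; (3) refit with weights 1/variance."""
    a0, b0 = ols(x, y)
    e2 = [(yi - a0 - b0 * xi) ** 2 for xi, yi in zip(x, y)]
    order = sorted(range(len(x)), key=lambda i: x[i])
    var = [0.0] * len(x)
    for rank, i in enumerate(order):
        nbrs = order[max(0, rank - window // 2): rank + window // 2 + 1]
        var[i] = sum(e2[j] for j in nbrs) / len(nbrs)
    w = [1.0 / v for v in var]
    sw = sum(w)
    xw = sum(wi * xi for wi, xi in zip(w, x)) / sw
    yw = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xw) * (yi - yw) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x)))
    return yw - b * xw, b

# simulated heteroscedastic data: noise grows with x
rng = random.Random(3)
x = [i / 10 for i in range(1, 101)]
y = [1 + 2 * xi + rng.gauss(0, 0.1 + 0.3 * xi) for xi in x]
a_hat, b_hat = two_stage_wls(x, y)
```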

3.
One common method for analyzing data in experimental designs when observations are missing was devised by Yates (1933), who developed his procedure based upon a suggestion by R. A. Fisher. Considering a linear model with independent, equi-variate errors, Yates substituted algebraic values for the missing data and then minimized the error sum of squares with respect to both the unknown parameters and the algebraic values. Yates showed that this procedure yielded the correct error sum of squares and a positively biased hypothesis sum of squares.

Others have elaborated on this technique. Chakrabarti (1962) gave a formal proof of Fisher's rule that produced a way to simplify the calculation of the auxiliary values to be used in place of the missing observations. Kshirsagar (1971) proved that the hypothesis sum of squares based on these values is biased, and developed an easy way to compute that bias. Sclove …
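Yates's substitution idea can be illustrated for a two-way layout with one observation per cell: iterating "replace the missing cell by its fitted value under the additive model" minimizes the error sum of squares over the substituted value, and for this design it converges to the classical closed-form missing-value formula. The table below is hypothetical:

```python
def yates_fill(table, ri, ci, iters=100):
    """Yates-style missing-value substitution for a two-way layout with one
    observation per cell under an additive (row + column) model: repeatedly
    replace the missing cell (ri, ci) by its least squares fitted value."""
    r, c = len(table), len(table[0])
    # start from the mean of the observed cells
    m = sum(v for row in table for v in row if v is not None) / (r * c - 1)
    for _ in range(iters):
        t = [[m if (i == ri and j == ci) else table[i][j]
              for j in range(c)] for i in range(r)]
        grand = sum(sum(row) for row in t) / (r * c)
        rmean = [sum(row) / c for row in t]
        cmean = [sum(t[i][j] for i in range(r)) / r for j in range(c)]
        m = rmean[ri] + cmean[ci] - grand   # additive-model fitted value
    return m

table = [[10, 12, 14],
         [11, None, 16],
         [9, 13, 15]]
m = yates_fill(table, 1, 1)
```

For this 3x3 table the closed-form value (rR + cC - G) / ((r-1)(c-1)) = (3*27 + 3*25 - 100) / 4 = 14, and the iteration converges to the same number.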

4.
Large scale crop surveys can be made frequently and inexpensively during a crop growing season using Landsat data. A crop's at-harvest acreage in a stratum can be estimated from the crop's estimated at-harvest acreage in a small sample of the stratum's segments. The stratum estimate can utilize Landsat imagery obtained during the current crop growing season and in previous years. A mixed effects analysis of variance model is used to generate a weighted least squares estimate of the stratum at-harvest acreage proportion for the current year. Similar Landsat based stratum crop proportion estimates can be combined with historical information on non-sampled (or unsuccessfully sampled) strata to provide crop acreage estimates for large regions. These regional estimates of the at-harvest acreage can be determined early in the crop growing season, at different intermediate points, and at harvest time.

5.
Partial least squares regression (PLS) is one method to estimate parameters in a linear model when predictor variables are nearly collinear. One way to characterize PLS is in terms of the scaling (shrinkage or expansion) along each eigenvector of the predictor correlation matrix. This characterization is useful in providing a link between PLS and other shrinkage estimators, such as principal components regression (PCR) and ridge regression (RR), thus facilitating a direct comparison of PLS with these methods. This paper gives a detailed analysis of the shrinkage structure of PLS, and several new results are presented regarding the nature and extent of shrinkage.

6.
Since the seminal paper by Cook (1977) in which he introduced Cook's distance, the identification of influential observations has received a great deal of interest and extensive investigation in linear regression. It is well documented that most of the popular diagnostic measures that are based on single-case deletion can mislead the analysis in the presence of multiple influential observations because of the well-known masking and/or swamping phenomena. Atkinson (1981) proposed a modification of Cook's distance. In this paper we propose a further modification of Cook's distance for the identification of a single influential observation. We then propose new measures for the identification of multiple influential observations, which are not affected by the masking and swamping problems. The efficiency of the new statistics is presented through several well-known data sets and a simulation study.
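For reference, the single-case-deletion Cook's distance from the 1977 paper can be computed directly for simple linear regression. The data below are hypothetical, and this is the classical measure, not the modifications proposed in the paper:

```python
import statistics

def cooks_distances(x, y):
    """Single-case-deletion Cook's distances for the simple linear
    regression y = a + b*x, computed from residuals and leverages."""
    n = len(x)
    xbar = statistics.fmean(x)
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * yi for xi, yi in zip(x, y)) / sxx
    a = statistics.fmean(y) - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    p = 2                                        # fitted coefficients: a, b
    s2 = sum(e * e for e in resid) / (n - p)     # residual variance
    lev = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]   # leverages h_ii
    return [(e * e / (p * s2)) * (h / (1 - h) ** 2)
            for e, h in zip(resid, lev)]

# hypothetical data: the last point is high-leverage and unusual
x = [1.0, 2.0, 3.0, 4.0, 5.0, 10.0]
y = [1.1, 1.9, 3.2, 3.9, 5.1, 14.0]
d = cooks_distances(x, y)
```

The last observation dominates the distances, which is exactly the single-influential-point situation the classical measure handles well; the masking problem the abstract targets arises when several such points hide one another.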

7.
Estimation for the log-logistic and Weibull distributions can be performed using the equations underlying probability plotting, a technique that often outperforms maximum likelihood (ML) estimation in small samples. This leads to a highly heteroskedastic regression problem. Exact expressions for the variances of the residuals are derived, which can be used to perform weighted regression. In large samples the ML estimator performs best, but it is shown that in smaller samples the weighted regression outperforms ML estimation with respect to bias and mean square error.
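The probability-plotting estimator mentioned here can be sketched for the Weibull case: order the sample, assign plotting positions, and regress the transformed positions on the log failure times. This is the ordinary, unweighted plotting-position regression; the median-rank positions and simulated data are assumptions of the sketch, not the paper's weighted version:

```python
import math
import random

def weibull_plot_estimates(sample):
    """Probability-plot (least squares) estimates of the Weibull shape and
    scale: regress log(-log(1 - F_i)) on log(t_(i)), using median-rank
    plotting positions F_i = (i - 0.3) / (n + 0.4)."""
    t = sorted(sample)
    n = len(t)
    xs = [math.log(ti) for ti in t]
    ys = [math.log(-math.log(1 - (i + 0.7) / (n + 0.4))) for i in range(n)]
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return slope, math.exp(-intercept / slope)   # (shape, scale)

# simulate Weibull(shape=2, scale=3) failures by inverse transform
rng = random.Random(7)
data = [3.0 * (-math.log(1 - rng.random())) ** 0.5 for _ in range(200)]
shape_hat, scale_hat = weibull_plot_estimates(data)
```

The residuals of this regression have unequal variances across order statistics, which is the heteroskedasticity the paper corrects by weighting.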

8.
Which normal density curve best approximates the sample histogram? The answer suggested here is the normal curve that minimizes the integrated squared deviation between the histogram and the normal curve. A simple computational procedure is described to produce this best-fitting normal density. A few examples are presented to illustrate that this normal curve does indeed provide a visually satisfying fit, one that is better than the traditional (x̄, s) answer. Some further aspects of this procedure are investigated. In particular it is shown that there is a satisfactory answer that is independent of the bar width of the histogram. It is also noted that this graphical procedure provides highly robust estimates of the sample mean and standard deviation. We demonstrate our technique using data including Newcomb's passage-of-light data and Fisher's iris data.
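A crude version of this best-fitting-normal idea can be sketched by minimizing the squared deviation between the histogram density and the normal curve over a grid of (mu, sigma) values. The grid, bin count, and simulated data are choices of this sketch, not the authors' procedure:

```python
import math
import random
import statistics

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def best_fit_normal(data, bins=10):
    """Grid-search the (mu, sigma) minimizing the squared deviation between
    the histogram density and the normal curve at the bin midpoints, a
    discrete stand-in for the integrated squared deviation."""
    lo, hi = min(data), max(data)
    width = (hi - lo) / bins
    counts = [0] * bins
    for v in data:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    density = [c / (len(data) * width) for c in counts]
    mids = [lo + (i + 0.5) * width for i in range(bins)]
    m0, s0 = statistics.fmean(data), statistics.stdev(data)
    best = None
    for mu in [m0 + 0.05 * s0 * k for k in range(-20, 21)]:
        for sigma in [s0 * (0.5 + 0.05 * k) for k in range(21)]:
            loss = sum((dens - normal_pdf(m, mu, sigma)) ** 2
                       for dens, m in zip(density, mids))
            if best is None or loss < best[0]:
                best = (loss, mu, sigma)
    return best[1], best[2]

rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(500)]
mu_hat, sigma_hat = best_fit_normal(data)
```

For clean normal data the minimizer lands near the usual (x̄, s); the payoff claimed in the abstract comes with contaminated samples, where the histogram fit is far less affected by outliers.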

9.
An early goal in autonomous navigation research is to build a research vehicle which can travel through office areas and factory floors. A simple strategy for directing the robot's movement in a hallway is to maintain a fixed distance from the wall. The problem is complicated by the fact that there are many factors in the environment, such as opened doors, pillars or other temporary objects, that can introduce 'noise' into the distance measure. To maintain a proper path with minimum interruption, the robot should have the ability to make decisions based on measurements and adjust its course only when it is deemed necessary. This report describes a new algorithm which enables the robot to move along and maintain a fixed distance from a reference object. The method, based on a robust estimator of location, combines information from earlier measurements with current observations from range sensors to effectively produce an estimate of the distance between the robot and the object. A simulation study, showing the trajectories generated using this algorithm with different parameters for different environments, is presented.
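The idea of combining past and current range readings through a robust location estimate can be caricatured with a running median, which shrugs off a spike from an open door where a running mean would not. This is a stand-in for the paper's (unspecified) estimator, and the readings are made up:

```python
import statistics

def robust_distance(history, window=9):
    """Robust estimate of the wall distance: the median of the most recent
    `window` range readings, insensitive to occasional outliers such as
    open doors or pillars."""
    recent = history[-window:]
    return statistics.median(recent)

# hypothetical sonar readings in cm; 120.0 is a door-opening spike
readings = [50.2, 50.1, 49.9, 50.0, 120.0, 50.3, 49.8, 50.1, 50.2]
est = robust_distance(readings)
```

The estimate stays near 50 cm despite the spike, so the robot adjusts course only when the wall distance genuinely changes.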

10.
Estimation of a cross-sectional spatial model containing both a spatial lag of the dependent variable and spatially autoregressive disturbances is considered. Kelejian and Prucha (1998) described a generalized two-stage least squares procedure for estimating such a spatial model. Their estimator is, however, not asymptotically optimal. We propose best spatial 2SLS estimators that are asymptotically optimal instrumental variable (IV) estimators. An associated goodness-of-fit (or overidentification) test is available. We suggest computationally simple and tractable numerical procedures for constructing the optimal instruments.

11.
Some modern approaches for the analysis of non-normally distributed and correlated data, including Liang and Zeger's (1986) method of generalized estimating equations (GEE), model the pattern of association among outcomes by assuming a structure for their correlation matrix. A number of relatively simple patterned correlation matrices are available for measurements with one level of correlation. However, modeling the correlation structure of data with multiple levels, or causes, of association is not as straightforward; this note discusses some of the difficulties and presents a simple class of correlation models that may prove useful in this endeavor.

12.
Consider the linear regression model Y = Xθ + ε, where Y denotes a vector of n observations on the dependent variable, X is a known matrix, θ is a vector of parameters to be estimated, and ε is a random vector of uncorrelated errors. If X'X is nearly singular, that is, if the smallest characteristic root of X'X is small, then a small perturbation in the elements of X, such as one due to measurement errors, induces considerable variation in the least squares estimate of θ. In this paper we examine, for the asymptotic case when n is large, the effect of such perturbation on the bias and mean squared error of the estimate.

13.
In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as failed. One consequence is that the number of degradation data collected varies from specimen to specimen. The information of random sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, which both assume fixed sample sizes, are commonly used for estimating parameters in the repeated measurements models typically applied to degradation data. However, the LS estimate is not consistent in the case of random sample sizes. This article derives the likelihood for the random sample size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances compared to the LS and MML estimates. All estimation methods can be greatly improved if the number of specimens increases from 5 to 10. A data set from a semiconductor application is used to illustrate our methods.

14.
A Robust Test of the Relationship between Market-Oriented Reform and Unbalanced Regional Economic Growth
An endogenous economic growth model is built and, using provincial panel data for China from 1999-2002, least trimmed squares regression is applied to analyze the effect of market-oriented reform on China's unbalanced regional growth. The results show that regional economic growth in China displayed a divergent trend over 2000-2005, that market-oriented reform bears a significantly "robust" relationship to regional economic growth, and that the pace of marketization together with regional innovation capacity explains much of China's regional economic disparity.

15.
The geometric characterization of linear regression in terms of the 'concentration ellipse' by Galton [Galton, F., 1886, Family likeness in stature (with Appendix by Dickson, J.D.H.). Proceedings of the Royal Society of London, 40, 42-73] and Pearson [Pearson, K., 1901, On lines and planes of closest fit to systems of points in space. Philosophical Magazine, 2, 559-572] was extended to the case of unequal variances of the presumably uncorrelated errors in the experimental data [McCartin, B.J., 2003, A geometric characterization of linear regression. Statistics, 37(2), 101-117]. In this paper, this geometric characterization is further extended to planar (and also linear) regression in three dimensions, where a beautiful interpretation in terms of the concentration ellipsoid is developed.

16.
This paper contains an application of the asymptotic expansion of a pFp(·) hypergeometric function to a problem encountered in econometrics. In particular, we consider an approximation of the distribution function of the limited information maximum likelihood (LIML) identifiability test statistic using the method of moments. An expression for the sth-order asymptotic approximation of the moments of the LIML identifiability test statistic is derived and tabulated. The exact distribution function of the test statistic is approximated by a member of the class of F (variance ratio) distribution functions having the same first two integer moments. Some tabulations of the approximating distribution function are included.

17.
This paper investigates estimation of parameters in a combination of the multivariate linear model and the growth curve model, called a generalized GMANOVA model. An analogy between the outer product of data vectors and the covariance matrix yields an approach that applies least squares directly to the covariance. An outer product least squares estimator of the covariance (COPLS estimator) is obtained, and its distribution is presented under a normality assumption on the error matrix. Based on the COPLS estimator, two-stage generalized least squares (GLS) estimators of the regression coefficients are derived. In addition, the asymptotic normality of these estimators is investigated. Simulation studies show that, in finite samples, the COPLS and two-stage GLS estimators are competitive alternatives to the existing ML estimator in terms of sample means, standard deviations, and the mean of the variance estimates. An example of application is also illustrated.

18.
In this article, maximum likelihood estimates of an exchangeable multinomial distribution are derived using a parametric form to model the parameters as functions of covariates. The nonlinearity of the exchangeable multinomial distribution and the parametric model make direct application of the Newton-Raphson and Fisher scoring algorithms computationally infeasible. Instead, parameter estimates are obtained as solutions to an iteratively weighted least-squares algorithm. A completely monotonic parametric form is proposed for defining the marginal probabilities that results in a valid probability model.

19.
A common method for estimating the time-domain parameters of an autoregressive process is to use the Yule–Walker equations. Tapering has been shown intuitively and proven theoretically to reduce the bias of the periodogram in the frequency domain, but the intuition for the similar bias reduction in the time-domain estimates has been lacking. We provide insight into why tapering reduces the bias in the Yule–Walker estimates by showing them to be equivalent to a weighted least-squares problem. This leads to the derivation of an optimal taper which behaves similarly to commonly used tapers.
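The Yule–Walker estimate with and without a taper can be illustrated for an AR(1) series. The Hann taper and the simulated data are choices of this sketch; the paper derives an optimal taper rather than using a fixed one:

```python
import math
import random

def yule_walker_ar1(x, taper=False):
    """AR(1) coefficient via the lag-1 Yule-Walker equation; optionally
    applies a Hann taper to the demeaned series before computing the
    autocovariances (the bias-reducing device discussed above)."""
    n = len(x)
    xbar = sum(x) / n
    z = [xi - xbar for xi in x]
    if taper:
        h = [0.5 * (1 - math.cos(2 * math.pi * t / (n - 1))) for t in range(n)]
        z = [hi * zi for hi, zi in zip(h, z)]
    c0 = sum(zi * zi for zi in z)
    c1 = sum(z[t] * z[t + 1] for t in range(n - 1))
    return c1 / c0

# simulate an AR(1) process with phi = 0.9
rng = random.Random(42)
phi = 0.9
x = [0.0]
for _ in range(200):
    x.append(phi * x[-1] + rng.gauss(0, 1))
est_plain = yule_walker_ar1(x)
est_taper = yule_walker_ar1(x, taper=True)
```

Both estimates sit below the true phi = 0.9 (the well-known downward bias of Yule–Walker); the tapered version is the one the abstract argues reduces that bias.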

20.
A reconciliation is offered for the diverse test results on Friedman's permanent income hypothesis. A large data sample of those receiving windfall income in the Bureau of Labor Statistics' 1972-1973 Consumer Expenditure Survey is divided according to the size of the windfall relative to estimated permanent income. A pattern of a declining marginal propensity to consume windfall income as the relative size of the windfall increases is apparent. These results support the permanent income hypothesis for relatively large windfalls.
