Similar Documents
20 similar documents retrieved (search time: 804 ms).
1.
A Bayesian model consists of two elements: a sampling model and a prior density. The problem of selecting a prior density is nothing but the problem of selecting a Bayesian model where the sampling model is fixed. A predictive approach is taken through a decision problem in which the loss function is the squared L2 distance between the sampling density and the posterior predictive density, since the aim of the method is to choose the prior whose posterior predictive density is as close as possible to the sampling density. An algorithm based on Lavine's linearization technique is developed for solving the problem.
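As a hedged illustration of this loss function (not the paper's algorithm), the squared L2 distance between two normal densities has a closed form, since the cross term satisfies ∫φ(x; a, s²)φ(x; b, t²)dx = φ(a − b; 0, s² + t²). A minimal Python sketch for a conjugate normal model, where all parameter values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

def l2_sq_normals(a, s, b, t):
    """Squared L2 distance between the N(a, s^2) and N(b, t^2) densities.
    Uses: integral of phi(x;a,s^2)*phi(x;b,t^2) dx = phi(a-b; 0, s^2+t^2)."""
    cross = norm.pdf(a - b, scale=np.sqrt(s**2 + t**2))
    return 1.0 / (2 * np.sqrt(np.pi) * s) + 1.0 / (2 * np.sqrt(np.pi) * t) - 2 * cross

# Sampling model: x ~ N(theta, sigma^2); prior theta ~ N(mu0, tau^2).
sigma, mu0, tau = 1.0, 0.0, 2.0                         # assumed values
x = np.random.default_rng(0).normal(0.5, sigma, size=20)
n, xbar = len(x), x.mean()
post_var = 1.0 / (n / sigma**2 + 1.0 / tau**2)
post_mean = post_var * (n * xbar / sigma**2 + mu0 / tau**2)
pred_sd = np.sqrt(sigma**2 + post_var)  # posterior predictive is N(post_mean, sigma^2 + post_var)

# Squared L2 loss between the true sampling density N(0.5, 1) and the predictive:
print(l2_sq_normals(0.5, sigma, post_mean, pred_sd))
```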

2.
The problem of estimating the location of a mobile robot in an unstructured environment is discussed. This work extends earlier results in two important ways. First, the bias and variance of the estimation are analytically derived as functions of the angular error and distance between frames. Second, the uncertainty covariance matrix is derived and is compared to the first-order approximation previously used to estimate the result of compounding uncertain transformations, providing a framework in which the appropriateness of the first-order estimate can be formally studied. A simulation study, showing how the biases and expected distance between the estimate and true position of the robot vary as a function of measurement errors and different path-planning strategies, is presented. Some possible improvements of the estimation method and future research topics are also given.
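The first-order covariance propagation the abstract refers to is commonly attributed to Smith, Self and Cheeseman's pose-compounding formulation; the sketch below uses that standard planar version, which may differ in notation from this paper's derivation:

```python
import numpy as np

def compound(p1, C1, p2, C2):
    """First-order mean and covariance of the compounded planar pose p1 (+) p2,
    with p = (x, y, theta) and C a 3x3 covariance matrix."""
    x1, y1, t1 = p1
    x2, y2, t2 = p2
    c, s = np.cos(t1), np.sin(t1)
    mean = np.array([x1 + c * x2 - s * y2,
                     y1 + s * x2 + c * y2,
                     t1 + t2])
    J1 = np.array([[1.0, 0.0, -s * x2 - c * y2],      # d(mean)/d(p1)
                   [0.0, 1.0,  c * x2 - s * y2],
                   [0.0, 0.0,  1.0]])
    J2 = np.array([[c, -s, 0.0],                       # d(mean)/d(p2)
                   [s,  c, 0.0],
                   [0.0, 0.0, 1.0]])
    return mean, J1 @ C1 @ J1.T + J2 @ C2 @ J2.T       # first-order propagation

p, C = compound((1.0, 2.0, 0.3), 0.01 * np.eye(3),
                (0.5, 0.0, 0.1), 0.02 * np.eye(3))     # illustrative poses
```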

3.
In this article, we consider the problem of variable selection in linear regression when multicollinearity is present in the data. It is well known that in the presence of multicollinearity, the performance of the least squares (LS) estimator of the regression parameters is not satisfactory. Consequently, subset selection methods, such as Mallows' Cp, which are based on LS estimates, lead to selection of inadequate subsets. To overcome the problem of multicollinearity in subset selection, a new subset selection algorithm based on the ridge estimator is proposed. It is shown that the new algorithm is a better alternative to Mallows' Cp when the data exhibit multicollinearity.
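The paper's selection algorithm is not reproduced here, but the ridge estimator that replaces LS when scoring candidate subsets is straightforward; a minimal sketch (the function names and the RSS-based score are illustrative, not the paper's criterion):

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimator (X'X + kI)^{-1} X'y; the biasing parameter k > 0
    stabilizes the solution when X'X is near-singular under multicollinearity."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def subset_rss(X, y, cols, k):
    """Residual sum of squares of a candidate subset fitted by ridge
    (an illustrative score; the paper's selection criterion is not reproduced)."""
    b = ridge(X[:, cols], y, k)
    r = y - X[:, cols] @ b
    return float(r @ r)
```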

4.
Mixed effect models, which contain both fixed effects and random effects, are frequently used in dealing with correlated data arising from repeated measurements (made on the same statistical units). In mixed effect models, the distributions of the random effects need to be specified, and they are often assumed to be normal. The analysis of correlated data from repeated measurements can also be done with generalized estimating equations (GEE) by assuming any type of correlation as initial input. Both mixed effect models and GEE are approaches requiring distribution specifications (likelihood, score function). In this article, we consider a distribution-free least squares approach under a general setting with missing values allowed. This approach does not require the specification of distributions or an initial correlation input. Consistency and asymptotic normality of the estimators are discussed.

5.
We consider the problem of estimating the two parameters of the discrete Good distribution. We first show that the sufficient statistics for the parameters are the arithmetic and the geometric means. The maximum likelihood estimators (MLE's) of the parameters are obtained by solving numerically a system of equations involving the Lerch zeta function and the sufficient statistics. We find an expression for the asymptotic variance-covariance matrix of the MLE's, which can be evaluated numerically. We show that the probability mass function satisfies a simple recurrence equation linear in the two parameters, and propose the quadratic distance estimator (QDE), which can be computed with an iteratively reweighted least-squares algorithm. The QDE is easy to calculate and admits a simple expression for its asymptotic variance-covariance matrix. We compute this matrix for the MLE's and the QDE for various values of the parameters and see that the QDE has very high asymptotic efficiency. Finally, we present a numerical example.
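As a hedged sketch, direct numerical maximum likelihood under the parameterization p(k) ∝ q^k k^(−s), k = 1, 2, …, commonly associated with the Good distribution can be written with the polylogarithm Li_s(q) = q·Φ(q, s, 1) (a special case of the Lerch zeta function) as normalizing constant. The paper instead solves the score equations; the toy data and starting values below are arbitrary assumptions:

```python
import numpy as np
from mpmath import polylog
from scipy.optimize import minimize

def good_negloglik(params, x):
    """Negative log-likelihood under p(k) = q^k * k^(-s) / Li_s(q), k = 1, 2, ...
    Depends on the data only through the arithmetic mean and the mean log
    (i.e. the geometric mean), matching the sufficient statistics above."""
    logq, s = params                       # optimize log q so that 0 < q < 1
    norm_const = float(polylog(float(s), float(np.exp(logq))))   # Li_s(q)
    return -(logq * np.sum(x) - s * np.sum(np.log(x))
             - len(x) * np.log(norm_const))

x = np.array([1, 1, 2, 3, 1, 5, 2, 1, 4, 2])   # toy sample (values >= 1)
fit = minimize(good_negloglik, x0=[np.log(0.5), 1.0], args=(x,),
               method="L-BFGS-B", bounds=[(-10.0, -1e-6), (-5.0, 10.0)])
q_hat, s_hat = np.exp(fit.x[0]), fit.x[1]
```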

6.
In a recent paper, Paparoditis [Scand. J. Statist. 27 (2000) 143] proposed a new goodness‐of‐fit test for time series models based on spectral density estimation. The test statistic is based on the distance between a kernel estimator of the ratio of the true and the hypothesized spectral density and the expected value of this estimator under the null, and it quantifies how well the parametric density fits the sample spectral density. In this paper, we give a detailed asymptotic analysis of the corresponding procedure under fixed alternatives.
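A heavily hedged sketch of the idea (not Paparoditis's exact statistic, centering, or kernel): smooth the ratio of the periodogram to the hypothesized spectral density over the Fourier frequencies and measure its squared distance from the constant 1, the approximate null value of that ratio:

```python
import numpy as np

def spectral_gof_stat(x, f0, h=0.1):
    """Kernel-smooth the periodogram/f0 ratio and return its integrated
    squared distance from 1. Illustrative only; the bandwidth h and the
    Gaussian kernel are assumptions, not the paper's choices."""
    n = len(x)
    freqs = 2 * np.pi * np.arange(1, n // 2 + 1) / n    # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:n // 2 + 1])**2 / (2 * np.pi * n)
    R = I / f0(freqs)                                   # periodogram ratio
    lam = np.linspace(0.05, np.pi - 0.05, 200)
    K = np.exp(-0.5 * ((lam[:, None] - freqs[None, :]) / h)**2)
    q_hat = (K @ R) / K.sum(axis=1)                     # smoothed ratio estimate
    return float(((q_hat - 1.0)**2).mean() * (lam[-1] - lam[0]))

# e.g. a white-noise null for data x:
# f0 = lambda lam: np.full_like(lam, x.var() / (2 * np.pi))
```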

7.
Frame corrections have been studied in census applications for a long time. One very promising method is dual system estimation, which is based on capture–recapture models. These methods have been applied recently in the USA, England, Israel and Switzerland. In order to gain information on subgroups of the population, structure preserving estimators can be applied [i.e. structure preserving estimation (SPREE) and generalized SPREE]. The present paper extends the SPREE approach with an alternative distance function, the chi‐square. The new method has shown improved estimates in our application with very small domains. A comparative study based on a large‐scale Monte Carlo simulation elaborates on advantages and disadvantages of the estimators in the context of the German register‐assisted Census 2011.
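The basic dual system (capture–recapture) estimator underlying these methods is the Lincoln–Petersen estimator; a minimal sketch with illustrative counts (the SPREE and chi-square extensions studied in the paper are not reproduced):

```python
def dual_system_estimate(n1, n2, m):
    """Lincoln-Petersen dual system estimate of population size:
    n1 = count in list A (e.g. the census), n2 = count in list B (e.g. a
    coverage survey), m = count found in both. Assumes independent captures."""
    return n1 * n2 / m

def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected variant, often preferred when m is small."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

print(dual_system_estimate(950, 900, 880))   # illustrative counts only
print(chapman_estimate(950, 900, 880))
```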

8.
When the probability of selecting an individual in a population is proportional to its lifelength, it is called length biased sampling. A nonparametric maximum likelihood estimator (NPMLE) of survival in a length biased sample is given in Vardi (1982). In this study, we examine the performance of Vardi's NPMLE in estimating the true survival curve when observations are from a length biased sample. We also compute estimators based on a linear combination (LCE) of empirical distribution function (EDF) estimators and weighted estimators. In our simulations, we consider observations from a mixture of two different distributions, one from F and the other from G, which is a length biased distribution of F. Through a series of simulations with various proportions of length biasing in a sample, we show that the NPMLE and the LCE closely approximate the true survival curve. Throughout the survival curve, the EDF estimators overestimate the survival. We also consider a case where the observations are from three different weighted distributions. Again, both the NPMLE and the LCE closely approximate the true distribution, indicating that the length biasedness is properly adjusted for. Finally, an efficiency study shows that Vardi's estimators are more efficient than the EDF estimators in the lower percentiles of the survival curves.
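For the special case of a purely length-biased, uncensored sample, the NPMLE of the unbiased survival function weights each observation by 1/x_i; a minimal sketch of that special case (Vardi's general NPMLE for the mixture setting studied here is more involved):

```python
import numpy as np

def length_biased_survival(x, t_grid):
    """NPMLE of the unbiased survival function from a purely length-biased,
    uncensored sample: each observation x_i gets mass proportional to 1/x_i,
    undoing the oversampling of long lifelengths."""
    x = np.asarray(x, dtype=float)
    w = (1.0 / x) / np.sum(1.0 / x)                  # renormalized 1/x weights
    return np.array([np.sum(w[x > t]) for t in t_grid])

rng = np.random.default_rng(1)
f = rng.gamma(2.0, 1.0, 5000)                        # F: Gamma(2, 1) lifetimes
lb = rng.choice(f, size=500, p=f / f.sum())          # length-biased draws from F
print(length_biased_survival(lb, [0.5, 1.0, 2.0]))   # approximates S_F(t)
```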

9.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization in rederiving Huber's estimator shows that it is not the only choice. We develop an alternative asymptotic minimax estimator and name it regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative numerical solution, which is extremely easy to implement and is based on the proximal point method; and a solution obtained by applying state-of-the-art nonlinear optimization software packages, e.g., SNOPT. Contribution: the generalization of the variational approach is interesting and should be useful in deriving other asymptotic minimax estimators in other problems.
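Huber's M-estimator itself, the baseline the abstract compares against, is readily available; a minimal sketch via scipy's robust loss (RSBN is not reproduced here, and the tuning constant 1.345 is the conventional choice, not necessarily the paper's):

```python
import numpy as np
from scipy.optimize import least_squares

# Robust linear regression with Huber's rho; f_scale plays the role of the
# Huber tuning constant.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=2, size=100)  # heavy-tailed noise

res = least_squares(lambda b: y - X @ b, x0=np.zeros(2),
                    loss="huber", f_scale=1.345)
print(res.x)   # robust estimate of (intercept, slope)
```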

10.
A simple modification of a T-square sampling procedure for studying unmapped spatial distributions allows for the collection of several distance measurements at each randomly selected sampling location. A test of the null hypothesis of a completely random distribution of point items using these data is found to have power comparable to a related test based on T-square sampling if the number of items of data is held fixed, and to have greater power if the number of sampling locations is held fixed.

11.
An algorithm is presented for computing an exact nonparametric interval estimate of the slope parameter in a simple linear regression model. The confidence interval is obtained by inverting the hypothesis test for slope that uses Spearman's rho. This method is compared to an exact procedure based on Kendall's tau. The Spearman rho procedure will generally give exact levels of confidence closer to desired levels, especially in small samples. Monte Carlo results comparing these two methods with the parametric procedure are given.
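A hedged sketch of the test-inversion idea: retain every candidate slope b for which Spearman's rho between x and y − bx is not significant. Note that scipy's p-value is an approximation for larger samples, whereas the paper's procedure is exact; the search grid below is an arbitrary assumption:

```python
import numpy as np
from scipy.stats import spearmanr

def slope_ci_spearman(x, y, alpha=0.05, grid=None):
    """Confidence set for the slope obtained by inverting the Spearman test:
    keep each candidate slope b for which rho(x, y - b*x) is not significant."""
    if grid is None:
        grid = np.linspace(-10, 10, 2001)    # assumed search range/resolution
    keep = [b for b in grid if spearmanr(x, y - b * x).pvalue >= alpha]
    return min(keep), max(keep)

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, 15)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 15)
print(slope_ci_spearman(x, y))               # interval containing the true slope 2
```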

12.
It is common for a linear regression model that the error terms display some form of heteroscedasticity while, at the same time, the regressors are also linearly correlated. Both of these problems have a serious impact on the ordinary least squares (OLS) estimates. In the presence of heteroscedasticity, the OLS estimator becomes inefficient, and a similar adverse impact can also be found on the ridge regression estimator that is alternatively used to cope with the problem of multicollinearity. In the available literature, the adaptive estimator has been established to be more efficient than the OLS estimator when there is heteroscedasticity of unknown form. The present article proposes a similar adaptation for the ridge regression setting in an attempt to obtain a more efficient estimator. Our numerical results, based on Monte Carlo simulations, show very attractive performance of the proposed estimator in terms of efficiency. Three different existing methods have been used for the selection of the biasing parameter. Moreover, three different distributions of the error term have been studied to evaluate the proposed estimator, namely the normal, Student's t, and F distributions.
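A heavily hedged sketch of one way such an adaptation could look (not the paper's estimator): estimate the variance function from squared OLS residuals, then plug the resulting weights into the ridge system:

```python
import numpy as np

def adaptive_ridge(X, y, k):
    """Sketch of a heteroscedasticity-adapted ridge fit: estimate the variance
    function from squared OLS residuals, then solve the weighted ridge system
    (X'WX + kI)^{-1} X'Wy. Illustrative; not the paper's exact adaptive scheme."""
    n, p = X.shape
    b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = (y - X @ b_ols)**2
    # crude variance-function estimate: quadratic fit of log r^2 on fitted values
    g = np.poly1d(np.polyfit(X @ b_ols, np.log(r2 + 1e-8), deg=2))
    w = 1.0 / np.exp(g(X @ b_ols))                   # weights ~ 1/variance
    Xw = X * w[:, None]
    return np.linalg.solve(X.T @ Xw + k * np.eye(p), Xw.T @ y)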

13.
The main purpose of this paper is to introduce first a new family of empirical test statistics for testing a simple null hypothesis when the vector of parameters of interest is defined through a specific set of unbiased estimating functions. This family of test statistics is based on a distance between two probability vectors, with the first probability vector obtained by maximizing the empirical likelihood (EL) on the vector of parameters, and the second vector defined from the fixed vector of parameters under the simple null hypothesis. The distance considered for this purpose is the phi-divergence measure. The asymptotic distribution is then derived for this family of test statistics. The proposed methodology is illustrated through the well-known data of Newcomb's measurements on the passage time for light. A simulation study is carried out to compare its performance with that of the EL ratio test when confidence intervals are constructed based on the respective statistics for small sample sizes. The results suggest that the ‘empirical modified likelihood ratio test statistic’ provides a competitive alternative to the EL ratio test statistic, and is also more robust than the EL ratio test statistic in the presence of contamination in the data. Finally, we propose empirical phi-divergence test statistics for testing a composite null hypothesis and present some asymptotic as well as simulation results for evaluating the performance of these test procedures.

14.
In this paper, we propose modified spline estimators for nonparametric regression models with right-censored data, especially when the censored response observations are converted to synthetic data. Efficient implementation of these estimators depends on the set of knot points and an appropriate smoothing parameter. We use three algorithms, the default selection method (DSM), myopic algorithm (MA), and full search algorithm (FSA), to select the optimum set of knots in a penalized spline method, with the smoothing parameter chosen according to different criteria, including the improved version of the Akaike information criterion (AICc), generalized cross validation (GCV), restricted maximum likelihood (REML), and Bayesian information criterion (BIC). We also consider the smoothing spline (SS), which uses all the data points as knots. The main goal of this study is to compare the performance of the algorithm and criteria combinations in the suggested penalized spline fits under censored data. A Monte Carlo simulation study is performed and a real data example is presented to illustrate the ideas in the paper. The results confirm that the FSA slightly outperforms the other methods, especially for high censoring levels.
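As a hedged illustration of smoothing-parameter selection by GCV (only one of the criteria above), here is a textbook truncated-power-basis penalized spline with a fixed knot set, with no censoring adjustment and none of the DSM/MA/FSA knot searches:

```python
import numpy as np

def pspline_gcv(x, y, knots, lams):
    """GCV selection of the smoothing parameter for a truncated-power-basis
    penalized spline (a common textbook construction; illustrative only)."""
    B = np.column_stack([np.ones_like(x), x, x**2, x**3] +
                        [np.clip(x - k, 0, None)**3 for k in knots])
    D = np.diag([0.0] * 4 + [1.0] * len(knots))       # penalize knot terms only
    best = (np.inf, None, None)
    for lam in lams:
        S = B @ np.linalg.solve(B.T @ B + lam * D, B.T)   # hat matrix
        yhat = S @ y
        rss = np.sum((y - yhat)**2)
        gcv = len(y) * rss / (len(y) - np.trace(S))**2    # GCV score
        if gcv < best[0]:
            best = (gcv, lam, yhat)
    return best

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0, 1, 100))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 100)
gcv, lam, yhat = pspline_gcv(x, y, knots=np.linspace(0.1, 0.9, 9),
                             lams=10.0**np.arange(-6, 3))
```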

15.
Linear maps of a single unclassified observation are used to estimate the mixing proportion in a mixture of two populations with homogeneous variances in the presence of covariates. With complete knowledge of the parameters of the individual populations, the linear map for which the estimator is unbiased and has minimum variance amongst all similar estimators can be determined. A plug-in estimator based on independent training samples from the component populations can be constructed and is asymptotically equivalent to Cochran's classification statistic V* for covariate classification; see Memon and Okamoto (1970). Under normality assumptions, an asymptotic expansion of the distribution of the plug-in estimator is available. In the absence of covariates, our estimator reduces to that suggested by Walker (1980), who investigated the problem based on information on large unclassified samples from a mixture of two populations with heterogeneous variances. In contrast, the distribution of Walker's estimator seems intractable in moderate sample sizes even under normality assumptions.

16.
Generalised estimating equations (GEE) for regression problems with vector‐valued responses are examined. When the response vectors are of mixed type (e.g. continuous–binary response pairs), the GEE approach is a semiparametric alternative to full‐likelihood copula methods, and is closely related to Prentice & Zhao's mean‐covariance estimation equations approach. When the response vectors are of the same type (e.g. measurements on left and right eyes), the GEE approach can be viewed as a ‘plug‐in’ to existing methods, such as the vglm function from the state‐of‐the‐art VGAM package in R. In either scenario, the GEE approach offers asymptotically correct inferences on model parameters regardless of whether the working variance–covariance model is correctly or incorrectly specified. The finite‐sample performance of the method is assessed using simulation studies based on a burn injury dataset and a sorbinil eye trial dataset. The method is applied to data analysis examples using the same two datasets, as well as to a trivariate binary dataset on three plant species in the Hunua ranges of Auckland.

17.
In this paper, we propose a new generalized autoregressive conditional heteroskedastic (GARCH) model using infinite normal scale-mixtures which can suitably avoid order selection problems in the application of finite normal scale-mixtures. We discuss its theoretical properties and develop a two-stage algorithm for the maximum likelihood estimator, estimating the non-parametric maximum likelihood estimator (NPMLE) of the mixing distribution as well as the GARCH parameters (two-stage MLE). For the estimation of the mixing distribution, we employ a fast computational algorithm proposed by Wang [On fast computation of the non-parametric maximum likelihood estimate of a mixing distribution. J R Stat Soc Ser B. 2007;69:185–198] under the gradient characterization of the non-parametric mixture likelihood. The GARCH parameters are then estimated using either the expectation-maximization algorithm or a general optimization scheme. In addition, we propose a new forecasting algorithm for value-at-risk (VaR) using the two-stage MLE and the NPMLE. Through a simulation study and real data analysis, we compare the performance of the two-stage MLE with existing estimators, including the quasi-maximum likelihood estimator based on the standard normal density and the finite normal mixture quasi-maximum estimated-likelihood estimator (cf. Lee S, Lee T. Inference for Box–Cox transformed threshold GARCH models with nuisance parameters. Scand J Stat. 2012;39:568–589), in terms of relative efficiency and the accuracy of VaR forecasting.
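For orientation, a minimal sketch of the standard-normal QMLE baseline mentioned in the abstract, with a one-step-ahead Gaussian VaR forecast (the paper's two-stage scale-mixture estimator is not reproduced; the starting values, bounds, and variance initialization are assumptions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def garch11_negloglik(params, r):
    """Gaussian quasi-log-likelihood of a GARCH(1,1):
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}."""
    omega, alpha, beta = params
    s2 = np.empty_like(r)
    s2[0] = r.var()                        # assumed initialization
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1]**2 + beta * s2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * s2) + r**2 / s2)

def fit_and_var(r, level=0.05):
    """Fit by QMLE, then return parameters and the one-step-ahead Gaussian VaR."""
    fit = minimize(garch11_negloglik, x0=[0.1 * r.var(), 0.05, 0.90], args=(r,),
                   method="L-BFGS-B",
                   bounds=[(1e-8, None), (1e-8, 1.0), (1e-8, 1.0)])
    omega, alpha, beta = fit.x
    s2 = r.var()
    for t in range(1, len(r)):             # rerun the variance recursion
        s2 = omega + alpha * r[t - 1]**2 + beta * s2
    s2_next = omega + alpha * r[-1]**2 + beta * s2
    return fit.x, -norm.ppf(level) * np.sqrt(s2_next)

r = np.random.default_rng(6).normal(0.0, 1.0, 500)   # toy return series
print(fit_and_var(r))
```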

18.
The following life-testing situation is considered. At some time in the distant past, n objects, from a population with life distribution F, were put in use; whenever an object failed, it was promptly replaced. At some time τ, long after the start of the process, a statistician starts observing the n objects in use at that time; he knows the age of each of those n objects, and observes each of them for a fixed length of time T ≤ ∞, or until failure, whichever occurs first. In the case where T is finite, some of the observations may be censored; in the case where T = ∞, there is no censoring. The total life of an object in use at time τ is a length-biased observation from F. A nonparametric estimator of the (cumulative) hazard function is proposed, and is used to construct an estimator of F which is of the product-limit type. Strong uniform consistency results (for n → ∞) are obtained. An “Aalen-Johansen” identity, satisfied by any pair of life distributions and their (cumulative) hazard functions, is used in obtaining rate-of-convergence results.

19.
For a stratified population under inverse sampling, we propose and study an unbiased estimator for the mean of units belonging to a domain with specific features. An alternative, simpler, ratio-type estimator is also considered. Empirical studies show that strategies based on inverse sampling can be superior to a more traditional strategy based on stratified simple random sampling with a fixed number of draws in each stratum.

20.
As is well known, the least-squares estimator of the slope of a univariate linear model sets to zero the covariance between the regression residuals and the values of the explanatory variable. To prevent the estimation process from being influenced by outliers, which can be theoretically modelled by a heavy-tailed distribution for the error term, one can substitute covariance with some robust measure of association, for example Kendall's tau in the popular Theil–Sen estimator. In a scarcely known Italian paper, Cifarelli [(1978), ‘La Stima del Coefficiente di Regressione Mediante l'Indice di Cograduazione di Gini’, Rivista di matematica per le scienze economiche e sociali, 1, 7–38. A translation into English is available at http://arxiv.org/abs/1411.4809 and will appear in Decisions in Economics and Finance] shows that a gain of efficiency can be obtained by using Gini's cograduation index instead of Kendall's tau. This paper introduces a new estimator, derived from another association measure recently proposed. Such a measure is strongly related to Gini's cograduation index, as both are built to vanish in the general framework of indifference. The newly proposed estimator is shown to be unbiased and asymptotically normally distributed. Moreover, all considered estimators are compared via their asymptotic relative efficiency and a small simulation study. Finally, some indications about the performance of the considered estimators in the presence of contaminated normal data are provided.
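The Theil–Sen estimator named above is the median of all pairwise slopes, which zeroes Kendall's tau between the residuals and x; a minimal sketch using scipy (the Gini-cograduation variant studied by Cifarelli and the paper's new estimator are not implemented here):

```python
import numpy as np
from itertools import combinations
from scipy.stats import theilslopes

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 50)
y = 1.0 + 2.0 * x + rng.standard_t(df=1, size=50)    # heavy-tailed errors

# library version: slope, intercept and a confidence interval for the slope
slope, intercept, lo, hi = theilslopes(y, x)

# equivalent by hand: median of all pairwise slopes
slope_manual = np.median([(y[j] - y[i]) / (x[j] - x[i])
                          for i, j in combinations(range(len(x)), 2)])
print(slope, slope_manual)
```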
