Similar Documents
 20 similar documents were retrieved.
1.
We propose an easy-to-implement method for making small-sample parametric inference about the root of an estimating equation expressible as a quadratic form in normal random variables. It is based on saddlepoint approximations to the distribution of the estimating equation whose unique root is a parameter's maximum likelihood estimator (MLE), while substituting conditional MLEs for the remaining (nuisance) parameters. Monotonicity of the estimating equation in its parameter argument enables us to relate these approximations to those for the estimator of interest. The proposed method is equivalent to a parametric bootstrap percentile approach in which Monte Carlo simulation is replaced by saddlepoint approximation. It finds applications in many areas of statistics, including nonlinear regression, time series analysis, inference on ratios of regression parameters in linear models, and calibration. We demonstrate the method in the context of some classical examples from nonlinear regression models and ratios-of-regression-parameters problems. Simulation results show that the proposed method, apart from being generally easier to implement, yields confidence intervals whose lengths and coverage probabilities compare favourably with those obtained from several competing methods proposed in the literature over the past half-century.
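The core ingredient of methods of this kind is a saddlepoint (Lugannani-Rice) approximation to the CDF of a quadratic form in normal variables. The Python sketch below illustrates only that building block, not the authors' procedure; the weights in `lam` are arbitrary example values and the result is checked against a Monte Carlo sample.

```python
# Minimal sketch: Lugannani-Rice saddlepoint approximation to P(Q <= q) for
# Q = sum_i lam[i] * Z_i^2 with Z_i iid N(0,1), checked against Monte Carlo.
# The weights `lam` are arbitrary illustrative values, not from the paper.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

lam = np.array([2.0, 1.0, 0.5])   # example eigenvalues of the quadratic form

def K(s):   return -0.5 * np.sum(np.log1p(-2.0 * s * lam))            # cumulant generating function
def K1(s):  return np.sum(lam / (1.0 - 2.0 * s * lam))                # K'(s)
def K2(s):  return np.sum(2.0 * lam**2 / (1.0 - 2.0 * s * lam)**2)    # K''(s)

def saddlepoint_cdf(q):
    """Lugannani-Rice approximation to P(Q <= q); assumes q != E[Q]."""
    upper = 1.0 / (2.0 * lam.max()) - 1e-8                 # edge of the CGF convergence strip
    s_hat = brentq(lambda s: K1(s) - q, -50.0, upper)      # saddlepoint: K'(s_hat) = q
    w = np.sign(s_hat) * np.sqrt(2.0 * (s_hat * q - K(s_hat)))
    u = s_hat * np.sqrt(K2(s_hat))
    return norm.cdf(w) + norm.pdf(w) * (1.0 / w - 1.0 / u)

rng = np.random.default_rng(0)
Q = (lam * rng.standard_normal((100_000, lam.size))**2).sum(axis=1)
for q in (1.0, 3.0, 7.0):
    print(f"q={q}: saddlepoint={saddlepoint_cdf(q):.4f}  Monte Carlo={np.mean(Q <= q):.4f}")
```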

2.
We develop a saddlepoint-based method for generating small-sample confidence bands for the population survival function from the Kaplan-Meier (KM), the product limit (PL), and the Abdushukurov-Cheng-Lin (ACL) survival function estimators, under the proportional hazards model. In the process we derive the exact distribution of these estimators and develop mid-population tolerance bands for them. Our saddlepoint method depends upon the Mellin transform of the zero-truncated survival estimator, which we derive for the KM, PL, and ACL estimators. These transforms are inverted via saddlepoint approximations to yield highly accurate approximations to the cumulative distribution functions of the respective cumulative hazard function estimators, and these distribution functions are then inverted to produce our saddlepoint confidence bands. For the KM, PL, and ACL estimators we compare our saddlepoint confidence bands with those obtained from competing large-sample methods as well as those obtained from the exact distribution. In our simulation studies we found that the saddlepoint confidence bands are very close to the confidence bands derived from the exact distribution, while being much easier to compute, and outperform the competing large-sample methods in terms of coverage probability.
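For context, the competing large-sample methods mentioned above are typically Greenwood-type intervals around the Kaplan-Meier curve. A minimal, self-contained sketch (made-up survival data, pointwise rather than simultaneous bands, not the saddlepoint construction of the paper) is:

```python
# Illustrative sketch: Kaplan-Meier estimate with Greenwood pointwise
# large-sample intervals, the usual large-sample baseline. Data are made up.
import numpy as np

time  = np.array([3., 5., 5., 8., 10., 12., 15., 18., 20., 22.])
event = np.array([1,  1,  0,  1,  1,   0,   1,   1,   0,   1 ])   # 1 = event, 0 = censored

order = np.argsort(time)
time, event = time[order], event[order]

t_uniq = np.unique(time[event == 1])
S, var_sum, surv, greenwood = 1.0, 0.0, [], []
for t in t_uniq:
    n_at_risk = np.sum(time >= t)
    d = np.sum((time == t) & (event == 1))
    S *= 1.0 - d / n_at_risk
    var_sum += d / (n_at_risk * (n_at_risk - d)) if n_at_risk > d else 0.0
    surv.append(S)
    greenwood.append(S**2 * var_sum)           # Greenwood variance estimate

surv, greenwood = np.array(surv), np.array(greenwood)
z = 1.96
lower = np.clip(surv - z * np.sqrt(greenwood), 0.0, 1.0)
upper = np.clip(surv + z * np.sqrt(greenwood), 0.0, 1.0)
for t, s, lo, hi in zip(t_uniq, surv, lower, upper):
    print(f"t={t:5.1f}  KM={s:.3f}  95% CI=({lo:.3f}, {hi:.3f})")
```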

3.
In this paper, we investigate four existing and three new confidence interval estimators for the negative binomial proportion (i.e., the proportion under inverse/negative binomial sampling). An extensive and systematic comparative study among these confidence interval estimators through Monte Carlo simulations is presented. The performance of these confidence intervals is evaluated in terms of their coverage probabilities and expected interval widths. Our simulation studies suggest that the confidence interval estimator based on the saddlepoint approximation is more appealing for large coverage levels (e.g., nominal level ≤ 1%), whereas the score confidence interval estimator is more desirable for commonly used coverage levels (e.g., nominal level > 1%). We illustrate these confidence interval construction methods with a real data set from a maternal congenital heart disease study.
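The kind of comparison described above can be reproduced in outline with a short simulation. The sketch below evaluates coverage probability and expected width for a simple Wald-type interval under inverse binomial sampling; it is only a baseline illustration of the study design, not one of the seven interval estimators examined in the paper, and the delta-method standard error is an assumption of this sketch.

```python
# Illustrative coverage/width study for a Wald-type interval for the
# proportion p under inverse (negative) binomial sampling with r successes.
# This baseline interval is an assumption of the sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(1)
r, p_true, z, B = 10, 0.3, 1.96, 20_000

failures = rng.negative_binomial(r, p_true, size=B)     # failures before the r-th success
n = r + failures
p_hat = r / n
se = np.sqrt(p_hat**2 * (1.0 - p_hat) / r)              # delta-method standard error
lower, upper = np.clip(p_hat - z * se, 0, 1), np.clip(p_hat + z * se, 0, 1)

coverage = np.mean((lower <= p_true) & (p_true <= upper))
width = np.mean(upper - lower)
print(f"coverage={coverage:.3f}, mean width={width:.3f}")
```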

4.
There is a tendency for the true variability of feasible GLS estimators to be understated by asymptotic standard errors. For estimation of SUR models, this tendency becomes more severe in large equation systems when estimation of the error covariance matrix, C, becomes problematic. We explore a number of potential solutions involving the use of improved estimators for the disturbance covariance matrix and bootstrapping. In particular, Ullah and Racine (1992) have recently introduced a new class of estimators for SUR models that use nonparametric kernel density estimation techniques. The proposed estimators have the same structure as the feasible GLS estimator of Zellner (1962), differing only in the choice of estimator for C. Ullah and Racine (1992) prove that their nonparametric density estimator of C can be expressed as Zellner's original estimator plus a positive definite matrix that depends on the smoothing parameter chosen for the density estimation. It is this structure of the estimator that most interests us, as it has the potential to be especially useful in large equation systems.

Atkinson and Wilson (1992) investigated the bias in the conventional and bootstrap estimators of coefficient standard errors in SUR models. They demonstrated that under certain conditions the former were superior, but they caution that neither estimator uniformly dominated and hence bootstrapping provides little improvement in the estimation of standard errors for the regression coefficients. Rilstone and Veall (1996) argue that an important qualification needs to be made to this somewhat negative conclusion. They demonstrated that bootstrapping can result in improved inferences if the procedures are applied to the t-ratios rather than to the standard errors. These issues are explored here for the case of large equation systems and when bootstrapping is combined with improved covariance estimation.
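As background for the discussion above, Zellner's feasible GLS for a SUR system can be sketched as follows on simulated two-equation data; the improved covariance estimators discussed here would simply replace `C_hat` before the GLS step.

```python
# Illustrative sketch of Zellner's (1962) feasible GLS for a two-equation SUR
# system: equation-by-equation OLS, residual covariance C_hat, then GLS on the
# stacked system. Data are simulated; improved estimators of C would replace C_hat.
import numpy as np

rng = np.random.default_rng(2)
n = 200
X1 = np.column_stack([np.ones(n), rng.standard_normal(n)])
X2 = np.column_stack([np.ones(n), rng.standard_normal(n)])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])               # true error covariance
E = rng.multivariate_normal([0, 0], Sigma, size=n)
y1 = X1 @ np.array([1.0, 2.0]) + E[:, 0]
y2 = X2 @ np.array([-1.0, 0.5]) + E[:, 1]

# Step 1: OLS residuals for each equation
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
U = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
C_hat = U.T @ U / n                                       # estimated error covariance

# Step 2: GLS on the stacked system using C_hat^{-1} (Kronecker) I_n
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
Omega_inv = np.kron(np.linalg.inv(C_hat), np.eye(n))
beta_fgls = np.linalg.solve(X.T @ Omega_inv @ X, X.T @ Omega_inv @ y)
print(beta_fgls)
```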

5.
This article investigates the application of depth estimators to crack growth models in construction engineering. Many crack growth models are based on the Paris–Erdogan equation, which describes crack growth by a deterministic differential equation. By introducing a stochastic error term, crack growth can be modeled by a non-stationary autoregressive process with Lévy-type errors. A regression depth approach is presented to estimate the drift parameter of the process. We then prove the consistency of the estimator under quite general assumptions on the error distribution. By extending the depth notion to simplicial depth, it is possible to use a degenerate U-statistic and to establish tests for general hypotheses about the drift parameter. Since the statistic asymptotically has a transformed $\chi_1^2$ distribution, simple confidence intervals for the drift parameter can be obtained. In the second part, simulations of AR(1) processes with different error distributions are used to examine the quality of the constructed test. Finally, we apply the presented method to crack growth experiments. We compare two datasets from independent experiments under different conditions but with the same material, and show that the parameter estimates differ significantly in this case.

6.
We derive saddlepoint approximations for the distribution and density functions of the half-life estimated by OLS from autoregressive time-series models. Our results are used to prove that none of the integer-order moments of these half-life estimators exist. This provides an explanation for the very large estimates of persistence, and the extremely wide confidence intervals, that have been reported by various authors in the empirical economics literature relating to purchasing power parity.
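For reference, the half-life estimator in question is the simple transformation ln(0.5)/ln(ρ̂) of the OLS autoregressive coefficient. The sketch below (simulated near-unit-root AR(1) data) shows how small sampling fluctuations in ρ̂ near one blow up the implied half-life, which is the behaviour the nonexistence-of-moments result explains.

```python
# Minimal sketch: OLS estimate of an AR(1) coefficient and the implied
# half-life ln(0.5)/ln(rho_hat), on simulated near-unit-root data.
import numpy as np

rng = np.random.default_rng(3)
rho_true, T = 0.95, 100
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

rho_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])     # OLS slope (no intercept)
half_life = np.log(0.5) / np.log(rho_hat)          # explodes as rho_hat -> 1, undefined past 1
print(f"rho_hat={rho_hat:.3f}, estimated half-life={half_life:.1f} periods")
```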

7.
We study the maxiset performance of a large collection of block thresholding wavelet estimators, namely the horizontal block thresholding family. We provide sufficient conditions on the choices of rates and threshold values to ensure that the corresponding adaptive estimators attain large maxisets. Moreover, we prove that any estimator of this family reconstructs the Besov balls at a near-minimax optimal rate that can be faster than that of any separable thresholding estimator. We then identify, in particular cases, the best estimator of the family, that is, the one associated with the largest maxiset. As a particular feature of this paper, we propose a refined approach that allows method-dependent threshold values. Through a series of simulation studies, we confirm the good performance of the best estimator by comparing it with the other members of its family.

8.
We apply the stochastic approximation method to construct a large class of recursive kernel estimators of a probability density, including the one introduced by Hall and Patil [1994. On the efficiency of on-line density estimators. IEEE Trans. Inform. Theory 40, 1504–1512]. We study the properties of these estimators and compare them with Rosenblatt's nonrecursive estimator. It turns out that, for pointwise estimation, it is preferable to use Rosenblatt's nonrecursive kernel estimator rather than any recursive estimator. By contrast, for estimation by confidence intervals, it is better to use a recursive estimator rather than Rosenblatt's estimator.
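To make the recursive/nonrecursive distinction concrete, here is a small sketch contrasting a classical Wolverton-Wagner-type recursive kernel density estimator, updated one observation at a time, with Rosenblatt's estimator on simulated N(0,1) data. It is a generic member of the recursive family rather than the specific stochastic-approximation class constructed in the paper.

```python
# Illustrative sketch: recursive kernel density estimate (Wolverton-Wagner type)
# versus Rosenblatt's nonrecursive estimator, on simulated standard normal data.
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

rng = np.random.default_rng(4)
X = rng.standard_normal(1000)
grid = np.linspace(-3, 3, 7)

# Recursive update: f_n = (1 - 1/n) f_{n-1} + (1/(n h_n)) K((x - X_n)/h_n)
f_rec = np.zeros_like(grid)
for n, x_n in enumerate(X, start=1):
    h_n = n ** (-1 / 5)
    f_rec += (gauss((grid - x_n) / h_n) / h_n - f_rec) / n

# Rosenblatt's nonrecursive estimator with a single bandwidth h = n^{-1/5}
h = len(X) ** (-1 / 5)
f_ros = gauss((grid[:, None] - X[None, :]) / h).mean(axis=1) / h

print(np.round(f_rec, 3))
print(np.round(f_ros, 3))
```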

9.
Negative binomial regression (NBR) and Poisson regression (PR) have become very popular for the analysis of count data in recent years. However, if there is a high degree of relationship between the independent variables, the problem of multicollinearity arises in these models. We introduce new two-parameter estimators (TPEs) for the NBR and PR models by unifying the two-parameter estimator (TPE) of Özkale and Kaçıranlar [The restricted and unrestricted two-parameter estimators. Commun Stat Theory Methods. 2007;36:2707–2725]. These new estimators are general estimators which include the maximum likelihood (ML) estimator, the ridge estimator (RE), the Liu estimator (LE) and the contraction estimator (CE) as special cases. Furthermore, biasing parameters for these estimators are given, and a Monte Carlo simulation is carried out to evaluate their performance using the mean square error (MSE) criterion. The benefits of the new TPEs are also illustrated in an empirical application. The results show that the new proposed TPEs for the NBR and PR models are better than the ML estimator, the RE and the LE.
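For illustration only, a two-parameter ridge/Liu-type estimator for Poisson regression can be written as beta(k, d) = (X'WX + kI)^(-1) (X'WX + k d I) beta_ML with W = diag(mu_hat); this form reduces to ML for k = 0, a ridge-type estimator for d = 0 and a Liu-type estimator for k = 1. The exact parameterisation used in the paper may differ, so treat the sketch below (simulated collinear regressors, statsmodels for the ML fit) as an assumed form.

```python
# Illustrative sketch with an ASSUMED two-parameter form
#   beta(k, d) = (X'WX + kI)^{-1} (X'WX + k d I) beta_ML,  W = diag(mu_hat),
# applied to a Poisson regression with nearly collinear regressors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 300
x1 = rng.standard_normal(n)
x2 = x1 + 0.05 * rng.standard_normal(n)            # nearly collinear with x1
X = sm.add_constant(np.column_stack([x1, x2]))
y = rng.poisson(np.exp(0.5 + 0.3 * x1 + 0.3 * x2))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
beta_ml, mu_hat = fit.params, fit.fittedvalues
XWX = X.T @ (mu_hat[:, None] * X)                  # X' diag(mu_hat) X

def tpe(k, d):
    return np.linalg.solve(XWX + k * np.eye(X.shape[1]),
                           (XWX + k * d * np.eye(X.shape[1])) @ beta_ml)

print("ML   :", np.round(beta_ml, 3))
print("ridge:", np.round(tpe(k=1.0, d=0.0), 3))
print("TPE  :", np.round(tpe(k=1.0, d=0.5), 3))
```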

10.
周先波  潘哲文 《统计研究》2015,32(5):97-105
This paper proposes a new semiparametric estimation method for the type-3 Tobit model. Under an independence assumption, the relationship satisfied by the conditional survival functions of the observable censored dependent variables in the main equation and the selection equation is exploited to construct a one-step joint estimator of the parameters of the type-3 Tobit model. Given a consistent estimator of the parameters of the selection equation, the method can also be used to construct a two-step estimator of the parameters of the main equation. We prove the consistency and asymptotic normality of the proposed one-step joint and two-step estimators. Simulation experiments show that the proposed estimators perform well in finite samples, and that the finite-sample performance of the one-step joint estimator is better than, or close to, that of Chen's (1997) estimator.

11.
This paper investigates the application of capture–recapture methods to human populations. Capture–recapture methods are commonly used for estimating the size of wildlife populations but can also be used in epidemiology and the social sciences, for estimating the prevalence of a particular disease or the size of the homeless population in a certain area. Here we focus on estimating the prevalence of infectious diseases. Several estimators of population size are considered: the Lincoln–Petersen estimator and its modified version, the Chapman estimator, Chao's lower bound estimator, Zelterman's estimator, McKendrick's moment estimator and the maximum likelihood estimator. In order to evaluate these estimators, they are applied to real three-source capture–recapture data. By conditioning on each of the sources of the three-source data, we have been able to compare the estimators with the true value that they are estimating. The Chapman and Chao estimators were compared in terms of their relative bias. A variance formula derived through conditioning is suggested for Chao's estimator, and normal 95% confidence intervals are calculated for this and the Chapman estimator. We then compare the coverage of the respective confidence intervals. Furthermore, a simulation study is included to compare Chao's and Chapman's estimators. Results indicate that Chao's estimator is less biased than Chapman's estimator unless both sources are independent; Chao's estimator also has the smaller mean squared error. Finally, the implications and limitations of the above methods are discussed, with suggestions for further development. We are grateful to the Medical Research Council for supporting this work.
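Several of the closed-form estimators listed above are one-liners. The sketch below evaluates them on made-up two-source counts (n1 and n2 cases found by each source, m found by both) and on frequency counts f1 and f2 of cases identified once and twice, which is what Chao's lower bound and Zelterman's estimator use; the numbers are purely illustrative.

```python
# Illustrative sketch: simple closed-form population-size estimators on
# made-up counts (not the three-source data analysed in the paper).
import math

n1, n2, m = 120, 90, 30          # hypothetical two-source counts
f1, f2 = 150, 30                 # hypothetical cases seen exactly once / twice
n_observed = n1 + n2 - m

lincoln_petersen = n1 * n2 / m
chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
chao_lower_bound = n_observed + f1**2 / (2 * f2)
zelterman = n_observed / (1 - math.exp(-2 * f2 / f1))

print(lincoln_petersen, chapman, chao_lower_bound, zelterman)
```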

12.
Log-normal linear models are widely used in applications, and it is often of interest to predict the response variable, or to estimate the mean of the response variable on the original scale, for a new set of covariate values. In this paper we consider the problem of efficient estimation of the conditional mean of the response variable on the original scale for log-normal linear models. Several existing estimators are reviewed first, including the maximum likelihood (ML) estimator, the restricted ML (REML) estimator, the uniformly minimum variance unbiased (UMVU) estimator, and a bias-corrected REML estimator. We then propose two estimators that minimize the asymptotic mean squared error and the asymptotic bias, respectively. A parametric bootstrap procedure is also described to obtain confidence intervals for the proposed estimators. Both the new estimators and the bootstrap procedure are very easy to implement. Comparisons of the estimators using simulation studies suggest that our estimators perform better than the existing ones, and that the bootstrap procedure yields confidence intervals with good coverage properties. A real application to estimating mean sediment discharge illustrates the methodology.
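The back-transformation problem, and the role of the parametric bootstrap, can be seen in a few lines. The sketch below contrasts the naive plug-in exp(x0'beta_hat) with the ML plug-in exp(x0'beta_hat + sigma_hat^2/2) on simulated log-normal data and wraps the latter in a percentile bootstrap interval; the bias- and MSE-optimised estimators proposed in the paper are not reproduced here.

```python
# Illustrative sketch: naive vs ML plug-in estimates of E[Y | x0] under a
# log-normal linear model, plus a parametric-bootstrap percentile interval.
import numpy as np

rng = np.random.default_rng(6)
n, beta, sigma = 200, np.array([1.0, 0.5]), 0.8
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
logy = X @ beta + sigma * rng.standard_normal(n)
x0 = np.array([1.0, 1.0])

def ml_mean(X, logy, x0):
    b = np.linalg.lstsq(X, logy, rcond=None)[0]
    s2 = np.mean((logy - X @ b) ** 2)            # ML variance estimate
    return np.exp(x0 @ b + 0.5 * s2), b, s2

mean_hat, b_hat, s2_hat = ml_mean(X, logy, x0)
naive = np.exp(x0 @ b_hat)                       # ignores the log-normal bias term

# Parametric bootstrap percentile interval for E[Y | x0]
boot = np.empty(2000)
for r in range(boot.size):
    logy_b = X @ b_hat + np.sqrt(s2_hat) * rng.standard_normal(n)
    boot[r] = ml_mean(X, logy_b, x0)[0]
print(naive, mean_hat, np.percentile(boot, [2.5, 97.5]))
```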

13.
When families have different numbers of offspring, Srivastava (1984) gave an alternative approach to deriving the maximum-likelihood estimators of inter- and intraclass correlations, which requires solving only one equation. Since the procedure is iterative and requires considerable computation, several alternative estimators have been proposed in the literature. In this paper, a comparison is made between the maximum-likelihood estimator and two alternative estimators proposed by Srivastava (1984). By obtaining the asymptotic normal distributions of the estimators, it is shown that one of the easily computable estimators is comparable to the maximum-likelihood estimator.

14.
We consider the first-order Poisson autoregressive model proposed by McKenzie [Some simple models for discrete variate time series. Water Resour Bull. 1985;21:645–650] and Al-Osh and Alzaid [First-order integer valued autoregressive (INAR(1)) process. J Time Ser Anal. 1987;8:261–275], which may be suitable in situations where the time series data are non-negative and integer valued. We derive the second-order bias of the squared difference estimator [Weiß. Process capability analysis for serially dependent processes of Poisson counts. J Stat Comput Simul. 2012;82:383–404] for one of the parameters and show that this bias can be used to define a bias-reduced estimator. The behaviour of a modified conditional least-squares estimator is also studied. Furthermore, we assess the asymptotic properties of the estimators discussed here. We present numerical evidence, based upon Monte Carlo simulation studies, showing that the proposed bias-adjusted estimator outperforms the other estimators in small samples. We also present an application to a real data set.
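As a baseline for the estimators discussed above, the sketch below simulates a Poisson INAR(1) path via binomial thinning and computes ordinary conditional least-squares estimates of the thinning parameter and the innovation mean; the squared-difference and bias-corrected estimators of the paper refine this kind of baseline and are not reproduced here.

```python
# Illustrative sketch: simulate a Poisson INAR(1) process (binomial thinning
# plus Poisson innovations) and compute plain conditional least-squares estimates.
import numpy as np

rng = np.random.default_rng(7)
alpha, lam, T = 0.5, 2.0, 300

x = np.empty(T, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))             # start near the stationary mean
for t in range(1, T):
    x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)

y, ylag = x[1:], x[:-1]
alpha_cls = np.sum((y - y.mean()) * (ylag - ylag.mean())) / np.sum((ylag - ylag.mean()) ** 2)
lam_cls = y.mean() - alpha_cls * ylag.mean()
print(alpha_cls, lam_cls)
```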

15.
We consider a partially linear model in which the vector of coefficients β in the linear part can be partitioned as (β1, β2), where β1 is the coefficient vector for main effects (e.g. treatment effect, genetic effects) and β2 is a vector of 'nuisance' effects (e.g. age, laboratory). In this situation, inference about β1 may benefit from moving the least squares estimate for the full model in the direction of the least squares estimate without the nuisance variables (Steinian shrinkage), or from dropping the nuisance variables if there is evidence that they do not provide useful information (pretesting). We investigate the asymptotic properties of Stein-type and pretest semiparametric estimators under quadratic loss and show that, under general conditions, a Stein-type semiparametric estimator improves on the conventional full-model semiparametric least squares estimator. The relative performance of the estimators is examined using asymptotic analysis of quadratic risk functions, and it is found that the Stein-type estimator outperforms the full-model estimator uniformly. By contrast, the pretest estimator dominates the least squares estimator only in a small part of the parameter space, which is consistent with the theory. We also consider an absolute-penalty-type estimator for partially linear models and give a Monte Carlo simulation comparison of the shrinkage, pretest and absolute-penalty-type estimators. The comparison shows that the shrinkage method performs better than the absolute-penalty-type estimation method when the dimension of the β2 parameter space is large.
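The Stein-type idea can be illustrated in a plain linear-model caricature: shrink the full-model estimate of the main-effect block β1 toward the submodel estimate that drops the nuisance block β2, with a weight driven by the Wald statistic for β2 = 0. The sketch below uses this standard positive-part form and simulated data; it is not the semiparametric estimator analysed in the paper.

```python
# Illustrative sketch: positive-part Stein-type shrinkage of the full-model
# estimate of beta1 toward the restricted (nuisance-free) estimate.
import numpy as np

rng = np.random.default_rng(8)
n, p1, p2 = 150, 2, 6
X1, X2 = rng.standard_normal((n, p1)), rng.standard_normal((n, p2))
y = X1 @ np.array([1.0, -1.0]) + X2 @ np.full(p2, 0.05) + rng.standard_normal(n)

X = np.hstack([X1, X2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
beta_restr = np.linalg.lstsq(X1, y, rcond=None)[0]

# Wald statistic for H0: beta2 = 0
resid = y - X @ beta_full
sigma2 = resid @ resid / (n - p1 - p2)
cov_full = sigma2 * np.linalg.inv(X.T @ X)
b2 = beta_full[p1:]
wald = b2 @ np.linalg.solve(cov_full[p1:, p1:], b2)

shrink = max(0.0, 1.0 - (p2 - 2) / wald)           # positive-part Stein weight
beta1_stein = beta_restr + shrink * (beta_full[:p1] - beta_restr)
print(beta_full[:p1], beta_restr, beta1_stein)
```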

16.
The inverse hypergeometric distribution is of interest in applications of inverse sampling without replacement from a finite population where a binary observation is made on each sampling unit. Sampling is performed by randomly choosing units sequentially, one at a time, until a specified number of units of one of the two types has been selected. Assuming the total number of units in the population is known but the number of each type is not, we consider the problem of estimating this parameter. We use the delta method to develop approximations for the variances of three parameter estimators, and then propose three large-sample confidence intervals for the parameter. Based on these results, we select a range of parameter values for the inverse hypergeometric distribution to investigate empirically the performance of these estimators. We evaluate their performance in terms of expected probability of parameter coverage and expected confidence interval length, calculated as means over the possible outcomes weighted by the appropriate outcome probabilities for each parameter value considered. The unbiased estimator of the parameter is preferred to the maximum likelihood estimator and to an estimator based on a negative binomial approximation, as evidenced by empirical estimates of closeness to the true parameter value. Confidence intervals based on the unbiased estimator tend to be shorter than those of the two competitors because of its relatively small variance, but at a slight cost in coverage probability.

17.
In this note, we consider data subject to middle censoring, where the variable of interest becomes unobservable when it falls within an interval of censorship. We demonstrate that the nonparametric maximum likelihood estimator (NPMLE) of the distribution function can be obtained by using Turnbull's (1976) EM algorithm or the self-consistent estimating equation of Jammalamadaka and Mangalam (2003), with an initial estimator that puts mass only on the innermost intervals. The consistency of the NPMLE can be established based on the asymptotic properties of self-consistent estimators (SCE) with mixed interval-censored data (Yu et al., 2000, 2001).

18.
We propose correcting for non-compliance in randomized trials by estimating the parameters of a class of semi-parametric failure time models, the rank preserving structural failure time models, using a class of rank estimators. These models are the structural or strong version of the “accelerated failure time model with time-dependent covariates” of Cox and Oakes (1984). In this paper we develop a large sample theory for these estimators, derive the optimal estimator within this class, and briefly consider the construction of “partially adaptive” estimators whose efficiency may approach that of the optimal estimator. We show that in the absence of censoring the optimal estimator attains the semiparametric efficiency bound for the model.

19.
In this paper, two non‐parametric estimators are proposed for estimating the components of an additive quantile regression model. The first estimator is a computationally convenient approach which can be viewed as a more viable alternative to existing kernel‐based approaches. The second estimator involves sequential fitting by univariate local polynomial quantile regressions for each additive component with the other additive components replaced by the corresponding estimates from the first estimator. The purpose of the extra local averaging is to reduce the variance of the first estimator. We show that the second estimator achieves oracle efficiency in the sense that each estimated additive component has the same variance as in the case when all other additive components were known. Asymptotic properties are derived for both estimators under dependent processes that are strictly stationary and absolutely regular. We also provide a demonstrative empirical application of additive quantile models to ambulance travel times.

20.
This paper is concerned with model selection and model averaging procedures for partially linear single-index models. The profile least squares procedure is employed to estimate the regression coefficients for the full model and for submodels. We show that the estimators for submodels are asymptotically normal. Based on the asymptotic distribution of the estimators, we derive the focused information criterion (FIC), formulate the frequentist model average (FMA) estimators and construct proper confidence intervals for the FMA estimators and for the FIC estimator, which is a special case of the FMA estimators. Monte Carlo studies demonstrate the superiority of the proposed method over the full model, and over models chosen by AIC or BIC, in terms of coverage probability and mean squared error. Our approach is further applied to real data from a male fertility study to explore potential factors related to sperm concentration and to estimate the relationship between sperm concentration and monobutyl phthalate.
