Similar Articles
20 similar articles found (search time: 547 ms)
1.
The linearization or Taylor series variance estimator and jackknife linearization variance estimator are popular for poststratified point estimators. In this note we propose a simple second-order linearization variance estimator for the poststratified estimator of the population total in two-stage sampling, using the second-order Taylor series expansion. We investigate the properties of the proposed variance estimator and its modified version and their empirical performance through simulation studies, in comparison with the standard and jackknife linearization variance estimators. Simulation studies are carried out on both artificially generated data and real data.

2.
3.
Ranked set sampling (RSS) is a cost-efficient technique for data collection when the units in a population can be easily judgment ranked by any cheap method other than actual measurements. Using auxiliary information in developing statistical procedures for inference about different population characteristics is a well-known approach. In this work, we deal with quantile estimation from a population with known mean when data are obtained according to an RSS scheme. Through the simple device of mean-correction (subtract off the sample mean and add on the known population mean), a modified estimator is constructed from the standard quantile estimator. Asymptotic normality of the new estimator and its asymptotic efficiency relative to the original estimator are derived. Simulation results for several underlying distributions show that the proposed estimator is more efficient than the traditional one.
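A minimal sketch of the mean-correction device described above, assuming an idealised balanced ranked set sample with perfect rankings drawn from a simulated normal population; the function names, the plain sample-quantile definition, and the design parameters (set size k, number of cycles m) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ranked_set_sample(pop, k, m, rng):
    """Balanced RSS: in each of m cycles and for each rank r = 1..k,
    draw k units, rank them (here: perfectly, by their true values)
    and measure only the r-th smallest."""
    measured = []
    for _ in range(m):
        for r in range(k):
            candidates = rng.choice(pop, size=k, replace=False)
            measured.append(np.sort(candidates)[r])
    return np.array(measured)

def mean_corrected_quantile(y, p, known_mean):
    """Standard sample quantile applied after the mean-correction device:
    subtract the sample mean, add the known population mean."""
    return np.quantile(y - y.mean() + known_mean, p)

rng = np.random.default_rng(0)
population = rng.normal(loc=10.0, scale=2.0, size=100_000)
mu = population.mean()                       # auxiliary information treated as known

y_rss = ranked_set_sample(population, k=3, m=20, rng=rng)
p = 0.25
print("standard RSS quantile  :", np.quantile(y_rss, p))
print("mean-corrected estimate:", mean_corrected_quantile(y_rss, p, mu))
print("population quantile    :", np.quantile(population, p))
```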

4.
In this paper, a new small domain estimator for area-level data is proposed. The proposed estimator is motivated by a real problem of estimating the mean price of housing transactions at the regional level in a European country, using data collected in a longitudinal survey conducted by a national statistical office. At the desired level of inference, it is not possible to provide accurate direct estimates because the sample sizes in these domains are very small. The proposed combined estimator is based on an area-level model with a heterogeneous covariance structure for the random effects. This model is an extension of the model of Fay and Herriot [5], but it integrates information across domains and over several periods of time. In addition, a modified method of estimating the variance components of time-series and cross-sectional area-level models is proposed, which incorporates the design weights. A Monte Carlo simulation, based on real data, is conducted to investigate the performance of the proposed estimators in comparison with other estimators frequently used in small area estimation problems. In particular, we compare their performance with that of the estimator based on the Rao–Yu model [23]. The simulation study also assesses the performance of the modified variance component estimators in comparison with the traditional ANOVA method. Simulation results show that the proposed estimators perform better than the other estimators in terms of both precision and bias.
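As background for the model being extended here, a minimal sketch of the classical Fay-Herriot area-level EBLUP with a Prasad-Rao moment estimator of the random-effect variance; it is not the proposed time-series extension or the design-weighted variance-component method, and the simulated areas are an assumption of the illustration.

```python
import numpy as np

def fay_herriot_eblup(y, X, D):
    """Classical area-level Fay-Herriot EBLUP.
    y : direct estimates (m,), X : area-level covariates (m, p),
    D : known sampling variances (m,).  Returns (eblup, sigma2_v, beta)."""
    m, p = X.shape
    # Prasad-Rao moment estimator of the random-effect variance sigma2_v
    H = X @ np.linalg.solve(X.T @ X, X.T)            # hat matrix of the OLS fit
    resid = y - H @ y
    sigma2_v = max(0.0, (resid @ resid - np.sum(D * (1 - np.diag(H)))) / (m - p))
    # GLS estimate of beta and the shrinkage weights gamma_i
    V_inv = 1.0 / (sigma2_v + D)
    beta = np.linalg.solve(X.T @ (X * V_inv[:, None]), X.T @ (V_inv * y))
    gamma = sigma2_v / (sigma2_v + D)
    eblup = gamma * y + (1 - gamma) * (X @ beta)
    return eblup, sigma2_v, beta

# toy illustration with simulated small areas
rng = np.random.default_rng(1)
m = 30
X = np.column_stack([np.ones(m), rng.normal(size=m)])
D = rng.uniform(0.5, 2.0, size=m)                    # known sampling variances
theta = X @ np.array([5.0, 2.0]) + rng.normal(0, 1.0, size=m)   # true area means
y = theta + rng.normal(0, np.sqrt(D))                # direct estimates
eblup, s2v, beta_hat = fay_herriot_eblup(y, X, D)
print("sigma2_v =", round(s2v, 3), " beta =", np.round(beta_hat, 3))
print("MSE direct:", round(np.mean((y - theta) ** 2), 3),
      " MSE EBLUP:", round(np.mean((eblup - theta) ** 2), 3))
```

The shrinkage weight gamma_i pulls unreliable direct estimates (large D_i) toward the synthetic regression prediction, which is the basic mechanism the proposed estimator builds on.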

5.
Analyses of randomised trials are often based on regression models which adjust for baseline covariates, in addition to randomised group. Based on such models, one can obtain estimates of the marginal mean outcome for the population under assignment to each treatment, by averaging the model-based predictions across the empirical distribution of the baseline covariates in the trial. We identify under what conditions such estimates are consistent, and in particular show that for canonical generalised linear models, the resulting estimates are always consistent. We show that a recently proposed variance estimator underestimates the variance of the estimator around the true marginal population mean when the baseline covariates are not fixed in repeated sampling and provide a simple adjustment to remedy this. We also describe an alternative semiparametric estimator, which is consistent even when the outcome regression model used is misspecified. The different estimators are compared through simulations and application to a recently conducted trial in asthma.
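A minimal sketch of the standardisation step described above (averaging model-based predictions across the empirical distribution of the baseline covariates), assuming a canonical logistic working model fitted with statsmodels on simulated trial data; it does not implement the paper's variance correction or the semiparametric alternative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(size=n)                        # baseline covariate
z = rng.integers(0, 2, size=n)                # randomised treatment indicator
y = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 1.0 * z + 0.8 * x))))

# covariate-adjusted canonical GLM (logistic regression)
X = np.column_stack([np.ones(n), z, x])
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

# standardisation: predict for every participant under z = 1 and under z = 0,
# then average over the empirical covariate distribution of the trial
X1 = np.column_stack([np.ones(n), np.ones(n), x])
X0 = np.column_stack([np.ones(n), np.zeros(n), x])
mu1, mu0 = fit.predict(X1).mean(), fit.predict(X0).mean()
print("marginal means:", round(mu1, 3), round(mu0, 3), " difference:", round(mu1 - mu0, 3))
```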

6.
In capture–recapture experiments the capture probabilities may depend on individual covariates such as an individual's weight or age. Typically this dependence is modelled through simple parametric functions of the covariates. Here we first demonstrate that misspecification of the model can produce biased estimates and subsequently develop a non-parametric procedure to estimate the functional relationship between the probability of capture and a single covariate. This estimator is then incorporated in a Horvitz–Thompson estimator to estimate the size of the population. The resulting estimators are evaluated in a simulation study and applied to a data set on captures of the Mountain Pygmy Possum.

7.
Researchers often report point estimates of turning point(s) obtained in polynomial regression models but rarely assess the precision of these estimates. We discuss three methods to assess the precision of such turning point estimates. The first is the delta method, which leads to a normal approximation of the distribution of the turning point estimator. The second method uses the exact distribution of the turning point estimator of quadratic regression functions. The third method relies on Markov chain Monte Carlo methods to provide a finite sample approximation of the exact distribution of the turning point estimator. We argue that the delta method may lead to misleading inference and that the other two methods are more reliable. We compare the three methods using two data sets from the environmental Kuznets curve literature, where the presence and location of a turning point in the income-pollution relationship is the focus of much empirical work.
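A sketch of the delta-method approach for the turning point tau = -b1/(2 b2) of a quadratic regression, using simulated data in place of the environmental Kuznets curve data sets; the exact-distribution and MCMC methods mentioned above are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
x = rng.uniform(0, 10, size=n)
y = 1.0 + 2.0 * x - 0.2 * x**2 + rng.normal(0, 1.5, size=n)   # true turning point at x = 5

X = np.column_stack([np.ones(n), x, x**2])
fit = sm.OLS(y, X).fit()
b0, b1, b2 = fit.params
tp = -b1 / (2 * b2)                          # point estimate of the turning point

# delta method: gradient of tau = -b1/(2*b2) with respect to (b0, b1, b2)
grad = np.array([0.0, -1.0 / (2 * b2), b1 / (2 * b2**2)])
se = np.sqrt(grad @ fit.cov_params() @ grad)
print(f"turning point = {tp:.3f}, delta-method 95% CI = "
      f"({tp - 1.96 * se:.3f}, {tp + 1.96 * se:.3f})")
```

When b2 is imprecisely estimated (near zero), this normal approximation can be very poor, which is precisely the concern raised in the abstract.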

9.
This paper describes the properties of a two-stage estimator of the dependence parameter in the Clayton-Oakes multivariate failure time model. The parameter is estimated from a likelihood function in which the marginal hazard functions are replaced by estimates. The method extends the approach of Shih and Louis (1995) and Genest, Ghoudi and Rivest (1995) to allow the marginal hazard for failure times to follow a stratified Cox (1972) model. The method is computationally simple and under mild regularity conditions produces a consistent, asymptotically normal estimator.

10.
We consider large sample inference in a semiparametric logistic/proportional-hazards mixture model. This model has been proposed for survival data where a positive proportion of subjects in the population are not susceptible to the event under consideration. Previous studies of the logistic/proportional-hazards mixture model have focused on developing point estimation procedures for the unknown parameters. This paper studies large sample inference based on the semiparametric maximum likelihood estimator. Specifically, we establish existence, consistency and asymptotic normality results for the semiparametric maximum likelihood estimator. We also derive consistent variance estimates for both the parametric and non-parametric components. The results provide a theoretical foundation for making large sample inference under the logistic/proportional-hazards mixture model.

11.
This article develops a local partial likelihood technique to estimate the time-dependent coefficients in Cox's regression model. The basic idea is a simple extension of the local linear fitting technique used in scatterplot smoothing. The coefficients are estimated locally based on the partial likelihood in a window around each time point. Multiple time-dependent covariates are incorporated in the local partial likelihood procedure. The procedure is useful as a diagnostic tool and can be used to uncover time dependence or departures from the proportional hazards model. The programming involved in local partial likelihood estimation is relatively simple, and existing programs for the proportional hazards model can be adapted with little effort. The asymptotic properties of the resulting estimator are established and compared with those from local constant fitting. A consistent estimator of the asymptotic variance is also proposed. The approach is illustrated by a real data set from a study of gastric cancer patients, and a simulation study is also presented.

12.
Heavy-tailed distributions have been used to model phenomena in which extreme events occur with high probability. In such settings, extreme events are often not observable beyond a certain threshold, and appropriate estimators are needed to deal with this type of censored data. We show that the well-known Hill-Hall estimator is unable to deal with censored data and yields highly biased estimates. We propose and study an unbiased modified maximum likelihood estimator, as well as a truncated tail regression estimator. We assess the expected value and the variance of these estimators in the cases of stable- and Pareto-distributed data.
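To see why censoring of the extremes matters, a sketch of the classical Hill estimator applied to a full and to a truncated Pareto sample; the truncation threshold, sample size and choice of k are assumptions of the example, and neither the Hill-Hall variant nor the proposed modified maximum likelihood or truncated tail regression estimators are reproduced.

```python
import numpy as np

def hill_alpha(x, k):
    """Classical Hill estimator of the Pareto tail index alpha,
    based on the k largest order statistics."""
    x = np.sort(x)
    return k / np.sum(np.log(x[-k:] / x[-k - 1]))

rng = np.random.default_rng(4)
alpha_true, n = 2.0, 20_000
x = rng.pareto(alpha_true, size=n) + 1.0      # Pareto with x_m = 1 and index 2

threshold = np.quantile(x, 0.98)              # extremes beyond this are not observed
x_observed = x[x <= threshold]

k = 500
print("Hill, full sample    :", round(hill_alpha(x, k), 3))           # close to 2
print("Hill, censored sample:", round(hill_alpha(x_observed, k), 3))  # biased upward
```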

13.
We propose a robust estimator in the errors-in-variables model using the least trimmed squares estimator. We call this estimator the orthogonal least trimmed squares (OLTS) estimator. We show that the OLTS estimator has a high breakdown point and appropriate equivariance properties. We develop an algorithm for computing the OLTS estimate. Simulations are performed to compare the efficiencies of the OLTS estimates with the total least squares (TLS) estimates, and a numerical example is given to illustrate the effectiveness of the estimator.
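A rough sketch of how an orthogonal least trimmed squares line could be computed: random two-point starts, SVD-based total least squares fits, and concentration steps that keep the h points with the smallest orthogonal residuals. The algorithm, the choice of h, and the contaminated errors-in-variables data are assumptions of the illustration, not the authors' algorithm.

```python
import numpy as np

def tls_line(x, y):
    """Total least squares (orthogonal regression) line via the SVD.
    Returns (intercept, slope, unit normal vector, centroid)."""
    Z = np.column_stack([x, y])
    c = Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z - c, full_matrices=False)
    normal = Vt[-1]                           # direction of smallest spread
    slope = -normal[0] / normal[1]
    intercept = c[1] - slope * c[0]
    return intercept, slope, normal, c

def olts_line(x, y, h=None, n_starts=200, n_csteps=20, seed=0):
    """Sketch of an OLTS fit: minimise the sum of the h smallest squared
    orthogonal residuals via random two-point starts and concentration steps."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([x, y])
    n = len(y)
    h = (n + 3) // 2 if h is None else h      # about half the data -> high breakdown
    best_obj, best_fit = np.inf, None
    for _ in range(n_starts):
        idx = rng.choice(n, size=2, replace=False)
        for _ in range(n_csteps):
            a, b, normal, c = tls_line(x[idx], y[idx])
            d2 = ((Z - c) @ normal) ** 2      # squared orthogonal residuals
            new_idx = np.sort(np.argsort(d2)[:h])
            if len(idx) == h and np.array_equal(new_idx, np.sort(idx)):
                break                         # concentration step has converged
            idx = new_idx
        obj = np.sort(d2)[:h].sum()
        if obj < best_obj:
            best_obj, best_fit = obj, (a, b)
    return best_fit

# toy errors-in-variables data with 20% gross outliers
rng = np.random.default_rng(5)
n = 100
x_true = rng.uniform(0, 10, size=n)
x = x_true + rng.normal(0, 0.3, size=n)       # error in the explanatory variable
y = 1.0 + 2.0 * x_true + rng.normal(0, 0.3, size=n)
y[:20] += 15.0                                # gross outliers
print("OLTS (intercept, slope):", np.round(olts_line(x, y), 3))
```

Using roughly half the data for h is what gives the trimmed fit its high breakdown point: close to half of the observations can be grossly wrong without ruining the fit.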

14.
15.
We propose an improved class of exponential ratio-type estimators for the coefficient of variation (CV) of a finite population in simple and stratified random sampling, using two auxiliary variables under a two-phase sampling scheme. We examine the properties of the proposed estimators based on a first-order approximation. The proposed class of estimators is more efficient than the usual sample CV estimator, the ratio estimator, the exponential ratio estimator, the usual difference estimator and a modified difference-type estimator. We also use real data sets for numerical comparisons.

16.
It is suggested that, in some situations, observations on random variables should be collected in the form of intervals. In this paper, the unknown parameters of a bivariate normal model are estimated from a set of point and interval observations via the maximum likelihood approach. The Newton-Raphson algorithm is used to find the estimates, and asymptotic properties of the estimator are provided. Monte Carlo studies are conducted to study the performance of the estimator. An example based on real-life data is presented to demonstrate the practical applicability of the method.
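A sketch of the likelihood construction for a bivariate normal with a mix of exact and interval observations: exact points contribute the density, intervals contribute a rectangle probability obtained from the joint CDF by inclusion-exclusion. A generic Nelder-Mead optimiser stands in for the Newton-Raphson algorithm, and the simulated data and half-unit interval widths are assumptions of the example.

```python
import numpy as np
from scipy import optimize
from scipy.stats import multivariate_normal

def rect_prob(lo, hi, mean, cov):
    """P(lo1 < X1 < hi1, lo2 < X2 < hi2) for a bivariate normal,
    via inclusion-exclusion on the joint CDF."""
    F = lambda a, b: multivariate_normal.cdf([a, b], mean=mean, cov=cov)
    return F(hi[0], hi[1]) - F(lo[0], hi[1]) - F(hi[0], lo[1]) + F(lo[0], lo[1])

def neg_loglik(theta, points, rect_lo, rect_hi):
    """Exact observations contribute the density; interval observations
    contribute the rectangle probability.  Somewhat slow: each evaluation
    calls the bivariate normal CDF several times per interval."""
    mu = theta[:2]
    s1, s2, rho = np.exp(theta[2]), np.exp(theta[3]), np.tanh(theta[4])
    cov = np.array([[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]])
    ll = multivariate_normal.logpdf(points, mean=mu, cov=cov).sum()
    for lo, hi in zip(rect_lo, rect_hi):
        ll += np.log(max(rect_prob(lo, hi, mu, cov), 1e-300))
    return -ll

rng = np.random.default_rng(6)
mu_true = np.array([1.0, 2.0])
cov_true = np.array([[1.0, 0.6], [0.6, 2.0]])
data = rng.multivariate_normal(mu_true, cov_true, size=230)
points = data[:200]                                       # measured exactly
rect_lo, rect_hi = data[200:] - 0.5, data[200:] + 0.5      # recorded only as intervals

res = optimize.minimize(neg_loglik, np.zeros(5), args=(points, rect_lo, rect_hi),
                        method="Nelder-Mead", options={"maxiter": 3000})
print("estimated means:", np.round(res.x[:2], 3),
      " estimated correlation:", round(np.tanh(res.x[4]), 3))
```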

17.
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for that treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased estimates of the true group means. We propose a new method to estimate the group means consistently, together with a corresponding variance estimator. Simulations showed that the proposed method produces unbiased estimates of the group means with the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
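A small illustration of the point made above: for a logistic model, the response evaluated at the mean covariate differs from the mean response over the studied population. The simulated data and coefficient values are assumptions; the paper's consistent group-mean estimator and its variance estimation are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
x = rng.normal(0, 2.0, size=n)                # strongly prognostic baseline covariate
trt = rng.integers(0, 2, size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.7 * trt + 1.2 * x))))

X = np.column_stack([np.ones(n), trt, x])
b = sm.GLM(y, X, family=sm.families.Binomial()).fit().params
expit = lambda t: 1 / (1 + np.exp(-t))

# "response at the mean covariate" (what many packages report as the model-based mean)
at_mean_cov = expit(b[0] + b[1] * 1 + b[2] * x.mean())
# mean response for the treated group over the studied population (standardised)
standardised = expit(b[0] + b[1] * 1 + b[2] * x).mean()
print("at mean covariate:", round(at_mean_cov, 3))
print("standardised mean:", round(standardised, 3))
print("observed, trt = 1:", round(y[trt == 1].mean(), 3))
```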

18.
Data from a surveillance system can be used to estimate the size of a disease population. For certain surveillance systems, a binomial mixture model arises as a natural choice. The Chao estimator estimates a lower bound of the population size. The Zelterman estimator estimates a parameter that is neither a lower bound nor an upper bound. By comparing the Chao estimator and the Zelterman estimator both theoretically and numerically, we conclude that the Chao estimator is better.
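The two estimators in their standard frequency-of-frequencies form, assuming simple zero-truncated Poisson-type counts rather than the surveillance-specific binomial mixture setting of the paper: the Chao lower bound n + f1^2/(2 f2) and Zelterman's estimator n / (1 - exp(-2 f2/f1)).

```python
import numpy as np

def chao_zelterman(counts):
    """Population-size estimates from the counts of how often each
    *observed* case appears in the registry."""
    counts = np.asarray(counts)
    n = len(counts)                           # distinct observed cases
    f1 = np.sum(counts == 1)                  # cases seen exactly once
    f2 = np.sum(counts == 2)                  # cases seen exactly twice
    chao = n + f1**2 / (2 * f2)               # lower bound for the population size
    lam = 2 * f2 / f1                         # Zelterman's Poisson-rate estimate
    zelterman = n / (1 - np.exp(-lam))
    return chao, zelterman

# toy illustration: Poisson registration counts, zero counts are never observed
rng = np.random.default_rng(8)
N_true = 2000
all_counts = rng.poisson(0.8, size=N_true)
chao, zelt = chao_zelterman(all_counts[all_counts > 0])
print("observed:", int(np.sum(all_counts > 0)), " Chao:", round(chao, 1),
      " Zelterman:", round(zelt, 1), " truth:", N_true)
```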

19.
A Bayesian estimator based on Franklin's randomized response procedure is proposed for proportion estimation in surveys dealing with a sensitive character. The method is simple to implement and avoids the usual drawback of Franklin's estimator, i.e., the occurrence of negative estimates when the population proportion is small. A simulation study is carried out to assess the performance of the proposed estimator as well as the corresponding credible interval.

20.
A simple least squares method for estimating a change in mean of a sequence of independent random variables is studied. The method first tests for a change in mean based on the regression principle of constrained and unconstrained sums of squares. Conditionally on a decision by this test that a change has occurred, least squares estimates are used to estimate the change point, the initial mean level (prior to the change point) and the change itself. The estimates of the initial level and change are functions of the change point estimate. All estimates are shown to be consistent, and those for the initial level and change are shown to be asymptotically jointly normal. The method performs well for moderately large shifts (one standard deviation or more), but the estimates of the initial level and change are biased in a predictable way for small shifts. The large sample theory is helpful in understanding this problem. The asymptotic distribution of the change point estimator is obtained for local shifts in mean, but the case of non-local shifts appears analytically intractable.
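A minimal sketch of the least squares procedure described above: the change point is chosen to minimise the unconstrained (two-mean) sum of squares, the initial level and shift are the segment means implied by that estimate, and an F-type statistic compares the constrained and unconstrained fits. The simulated series is an assumption, and the maximal F statistic does not follow the nominal F distribution, so its critical value would have to come from the change point literature rather than standard F tables.

```python
import numpy as np

def change_point_ls(y):
    """Least squares estimate of a single change in mean.
    Returns (k_hat, initial_level, shift, F_stat), where the change occurs
    between positions k_hat - 1 and k_hat (0-based indexing)."""
    n = len(y)
    sse_constrained = np.sum((y - y.mean()) ** 2)    # single common mean
    best_k, best_sse = None, np.inf
    for k in range(1, n):                            # candidate change points
        left, right = y[:k], y[k:]
        sse = np.sum((left - left.mean()) ** 2) + np.sum((right - right.mean()) ** 2)
        if sse < best_sse:
            best_k, best_sse = k, sse
    mu1 = y[:best_k].mean()                          # initial mean level
    delta = y[best_k:].mean() - mu1                  # estimated change
    # F-type statistic from constrained vs. unconstrained sums of squares
    F = (sse_constrained - best_sse) / (best_sse / (n - 2))
    return best_k, mu1, delta, F

rng = np.random.default_rng(9)
y = np.concatenate([rng.normal(0.0, 1.0, 60), rng.normal(1.2, 1.0, 40)])  # 1.2 SD shift at t = 60
k_hat, mu1_hat, delta_hat, F = change_point_ls(y)
print(f"change point = {k_hat}, initial level = {mu1_hat:.3f}, "
      f"shift = {delta_hat:.3f}, F = {F:.2f}")
```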
