Similar Documents
20 similar documents found (search time: 31 ms)
1.
Hedonic price models are commonly used in the study of markets for various goods, most notably those for wine, art, and jewelry. These models were developed to estimate implicit prices of product attributes within a given product class, where in the case of some goods, such as wine, substantial product differentiation exists. To address this issue, recent research on wine prices employs local polynomial regression clustering (LPRC) for estimating regression models under class uncertainty. This study demonstrates that a superior empirical approach – estimation of a mixture model – is applicable to a hedonic model of wine prices, provided only that the dependent variable in the model is rescaled. The present study also catalogues several advantages of mixture-model estimation over LPRC modeling.

2.
The problem of estimating the location of a mobile robot in an unstructured environment is discussed. This work extends earlier results in two important ways. First, the bias and variance of the estimation are analytically derived as functions of the angular error and distance between frames. Second, the uncertainty covariance matrix is derived and is compared to the first-order approximation previously used to estimate the result of compounding uncertain transformations, providing a framework in which the appropriateness of the first-order estimate can be formally studied. A simulation study, showing how the biases and expected distance between the estimate and true position of the robot vary as a function of measurement errors and different path plans, is presented. Some possible improvements of the estimation method and future research topics are also given.

3.
The nonparametric estimation of the growth curve has been extensively studied in both stationary and some nonstationary particular situations. In this work, we consider the statistical problem of estimating the average growth curve for a fixed design model with nonstationary error process. The nonstationarity considered here is of a general form, and this article may be considered as an extension of previous results. The optimal bandwidth is shown to depend on the singularity of the autocovariance function of the error process along the diagonal. A Monte Carlo study is conducted in order to assess the influence of the number of subjects and the number of observations per subject on the estimation.

4.
The von Bertalanffy growth model is extended to incorporate explanatory variables. The generalized model includes the switched growth model and the seasonal growth model as special cases, and can also be used to assess the tagging effect on growth. Distribution-free and consistent estimating functions are constructed for estimation of growth parameters from tag-recapture data in which age at release is unknown. This generalizes the work of James (1991, Biometrics 47, 1519–1530), who considered the classical model and allowed for individual variability in growth. A real dataset from barramundi (Lates calcarifer) is analysed to estimate the growth parameters and possible effect of tagging on growth.
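The classical curve that the paper generalizes can be sketched directly. A minimal illustration in Python (function and parameter names are ours, not from the paper), including the length-increment form that is convenient for tag-recapture data in which age at release is unknown:

```python
import math

def von_bertalanffy(t, L_inf, k, t0):
    """Classical von Bertalanffy length-at-age curve:
    L(t) = L_inf * (1 - exp(-k * (t - t0)))."""
    return L_inf * (1.0 - math.exp(-k * (t - t0)))

def growth_increment(L_release, dt, L_inf, k):
    """Expected length increment over time-at-liberty dt, given length at
    release. Note it depends on L_release, not on the (unknown) age:
    L(t + dt) - L(t) = (L_inf - L(t)) * (1 - exp(-k * dt))."""
    return (L_inf - L_release) * (1.0 - math.exp(-k * dt))
```

The increment form is why tag-recapture analyses can proceed without age at release: only length at release and time at liberty enter the expected growth.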

5.
We introduce a new estimator of the conditional survival function given some subset of the covariate values under a proportional hazards regression. The new estimate does not require estimating the base-line cumulative hazard function. An estimate of the variance is given and is easy to compute, involving only those quantities that are routinely calculated in a Cox model analysis. The asymptotic normality of the new estimate is shown by using a central limit theorem for Kaplan–Meier integrals. We indicate the straightforward extension of the estimation procedure under models with multiplicative relative risks, including non-proportional hazards, and to stratified and frailty models. The estimator is applied to a gastric cancer study where it is of interest to predict patients' survival based only on measurements obtained before surgery, the time at which the most important prognostic variable, stage, becomes known.

6.
We present a general method of adjustment for non-ignorable non-response in studies where one or more further attempts are made to contact initial non-responders. A logistic regression model relates the probability of response at each contact attempt to covariates and outcomes of interest. We assume that the effect of these covariates and outcomes on the probability of response is the same at all contact attempts. Knowledge of the number of contact attempts enables estimation of the model by using only information from the respondents and the number of non-responders. Three approaches for fitting the response models and estimating parameters of substantive interest and their standard errors are compared: a modified conditional likelihood method in which the fitted inverse probabilities of response are used in weighted analyses for the outcomes of interest, an EM procedure with the Louis formula and a Bayesian approach using Markov chain Monte Carlo methods. We further propose the creation of several sets of weights to incorporate uncertainty in the probability weights in subsequent analyses. Our methods are applied as a sensitivity analysis to a postal survey of symptoms in Persian Gulf War veterans and other servicemen.

7.
Estimating equations based on marginal generalized linear models are useful for regression modelling of correlated data, but inference and testing require reliable estimates of standard errors. We introduce a class of variance estimators based on the weighted empirical variance of the estimating functions and show that an adaptive choice of weights allows reliable estimation both asymptotically and by simulation in finite samples. Connections with previous bootstrap and jackknife methods are explored. The effect of reliable variance estimation is illustrated in data on health effects of air pollution in King County, Washington.

8.
Yang (1998) proposes a new class of scale estimators for the censored two sample accelerated life model. Unlike most previous results, Yang's estimators have the property that their asymptotic variance can be easily and reliably estimated, and inferences on the scale parameter can be easily obtained. In this article, we further study the estimating function of Yang. Several new classes of weights and a new slope estimate are considered. Through extensive numerical studies, we found that a new weight together with the new slope estimate results in an estimating function that improves on the choices recommended by Yang.

9.
The major problem of mean–variance portfolio optimization is parameter uncertainty. Many methods have been proposed to tackle this problem, including shrinkage methods, resampling techniques, and imposing constraints on the portfolio weights. This paper suggests a new estimation method for mean–variance portfolio weights based on the concept of generalized pivotal quantity (GPQ) in the case when asset returns are multivariate normally distributed and serially independent. Both point and interval estimations of the portfolio weights are considered. Compared with Markowitz's mean–variance model and with resampling and shrinkage methods, we find that the proposed GPQ method typically yields the smallest mean-squared error for the point estimate of the portfolio weights and obtains a satisfactory coverage rate for their simultaneous confidence intervals. Finally, we apply the proposed methodology to address a portfolio rebalancing problem.
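For context, the plug-in Markowitz weights that the GPQ method is benchmarked against can be sketched as follows (a simplified illustration under the paper's i.i.d. multivariate normal assumption; the sum-to-one normalisation and function name are our choices, not the paper's):

```python
import numpy as np

def plugin_mv_weights(returns):
    """Plug-in mean-variance portfolio weights: w proportional to
    inv(Sigma_hat) @ mu_hat, normalised to sum to one.
    `returns` is a (T, p) array of asset returns."""
    mu = returns.mean(axis=0)                 # sample mean vector
    sigma = np.cov(returns, rowvar=False)     # sample covariance matrix
    raw = np.linalg.solve(sigma, mu)          # inv(Sigma) @ mu without explicit inverse
    return raw / raw.sum()
```

The parameter-uncertainty problem the abstract describes arises precisely because `mu` and `sigma` here are noisy estimates that enter the weights nonlinearly.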

10.
This paper describes an estimating function approach for parameter estimation in linear and nonlinear time series models with infinite variance stable errors. Joint estimates of location and scale parameters are derived for classes of autoregressive (AR) models and random coefficient autoregressive (RCA) models with stable errors, as well as for AR models with stable autoregressive conditionally heteroscedastic (ARCH) errors. Fast, on-line, recursive parametric estimation for the location parameter based on estimating functions is discussed using simulation studies. A real financial time series is also discussed in some detail.

11.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets—as in stratified analyses and meta analyses—a weighted average, with weights ‘proportional’ to inverse variance matrices, is shown to have a minimal variance matrix (a standard fact when d = 1)—minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is shown to be asymptotically efficient in the combined data set and avoids the need to merge the data sets and estimate the parameter in question afresh. This is so whatever additional non-common nuisance parameters may be in the models for the various data sets. A special case of this appeared in Fisher [1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725.]: Optimal weights are ‘proportional’ to information matrices, and he argued that sample information should be used as weights rather than expected information, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis—proportional-hazards, logistic or linear—, combination of independent ROC curves, and meta analysis. A test for homogeneity of the parameter across the data sets is also given.
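The matrix-weighted average at the heart of the result can be sketched directly (a minimal illustration; function names are ours):

```python
import numpy as np

def combine_estimates(estimates, covariances):
    """Inverse-variance (matrix-)weighted combination of independent estimates
    of a common d-dimensional parameter:
        theta_hat = inv(sum W_i) @ sum(W_i @ theta_hat_i),  W_i = inv(V_i).
    Returns the combined estimate and its covariance, inv(sum W_i)."""
    weights = [np.linalg.inv(v) for v in covariances]   # W_i = inverse covariance
    total = sum(weights)                                # sum of information-type weights
    combined = np.linalg.solve(total, sum(w @ e for w, e in zip(weights, estimates)))
    return combined, np.linalg.inv(total)
```

For d = 1 this reduces to the familiar scalar inverse-variance weighting used in meta analysis.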

12.
We develop a variance reduction method for smoothing splines. For a given point of estimation, we define a variance-reduced spline estimate as a linear combination of classical spline estimates at three nearby points. We first develop a variance reduction method for spline estimators in univariate regression models. We then develop an analogous variance reduction method for spline estimators in clustered/longitudinal models. Simulation studies are performed which demonstrate the efficacy of our variance reduction methods in finite sample settings. Finally, a real data analysis with the motorcycle data set is performed. Here we consider variance estimation and generate 95% pointwise confidence intervals for the unknown regression function.

13.
Longitudinal or clustered response data arise in many applications such as biostatistics, epidemiology and environmental studies. The repeated responses cannot in general be assumed to be independent. One method of analysing such data is by using the generalized estimating equations (GEE) approach. The current GEE method for estimating regression effects in longitudinal data focuses on the modelling of the working correlation matrix assuming a known variance function. However, correct choice of the correlation structure may not necessarily improve estimation efficiency for the regression parameters if the variance function is misspecified [Wang YG, Lin X. Effects of variance-function misspecification in analysis of longitudinal data. Biometrics. 2005;61:413–421]. In this connection two problems arise: finding a correct variance function and estimating the parameters of the variance function. In this paper, we study the problem of estimating the parameters of the variance function assuming that the form of the variance function is known and then the effect of a misspecified variance function on the estimates of the regression parameters. We propose a GEE approach to estimate the parameters of the variance function. This estimation approach borrows the idea of Davidian and Carroll [Variance function estimation. J Amer Statist Assoc. 1987;82:1079–1091] by solving a nonlinear regression problem where residuals are regarded as the responses and the variance function is regarded as the regression function. A limited simulation study shows that the proposed method performs at least as well as the modified pseudo-likelihood approach developed by Wang and Zhao [A modified pseudolikelihood approach for analysis of longitudinal data. Biometrics. 2007;63:681–689]. Both these methods perform better than the GEE approach.

14.
For nonparametric regression models with fixed and random design, two classes of estimators for the error variance have been introduced: second sample moments based on residuals from a nonparametric fit, and difference-based estimators. The former are asymptotically optimal but require estimating the regression function; the latter are simple but have larger asymptotic variance. For nonparametric regression models with random covariates, we introduce a class of estimators for the error variance that are related to difference-based estimators: covariate-matched U-statistics. We give conditions on the random weights involved that lead to asymptotically optimal estimators of the error variance. Our explicit construction of the weights uses a kernel estimator for the covariate density.
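As a point of reference, the simplest first-order difference-based variance estimator—the kind the covariate-matched U-statistics improve upon—can be written in a few lines (an illustration, assuming the design points are sorted so adjacent responses share nearly the same mean):

```python
import numpy as np

def difference_variance(y):
    """First-order difference-based estimate of the error variance in
    y_i = m(x_i) + eps_i (x sorted):
        sigma2_hat = sum((y_{i+1} - y_i)**2) / (2 * (n - 1)).
    Differencing cancels the smooth mean m, so no fit of m is needed."""
    y = np.asarray(y, dtype=float)
    d = np.diff(y)
    return np.sum(d ** 2) / (2.0 * (len(y) - 1))
```

Its simplicity is the appeal noted in the abstract; the cost is a larger asymptotic variance than residual-based estimators.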

15.
The growth curve model introduced by Potthoff and Roy (1964) is a general statistical model which includes as special cases regression models and both univariate and multivariate analysis of variance models. The methods currently available for estimating the parameters of this model assume an underlying multivariate normal distribution of errors. In this paper, we discuss two robust estimators of the growth curve location and scatter parameters based upon M-estimation techniques and the work of Maronna (1976). The asymptotic distributions of these robust estimators are discussed and a numerical example is given.

16.
This paper compares methods for modeling the probability of removal when variable amounts of removal effort are present. A hierarchical modeling framework can produce estimates of animal abundance and detection from replicated removal counts taken at different locations in a region of interest. A common method of specifying variation in detection probabilities across locations or replicates is with a logistic model that incorporates relevant detection covariates. As an alternative to this logistic model, we propose using a catch–effort (CE) model to account for heterogeneity in detection when a measure of removal effort is available for each removal count. This method models the probability of detection as a nonlinear function of removal effort and a removal probability parameter that can vary spatially. Simulation results demonstrate that the CE model can effectively estimate abundance and removal probabilities when average removal rates are large but both the CE and logistic models tend to produce biased estimates as average removal rates decrease. We also found that the CE model fits better than logistic models when estimating wild turkey abundance using harvest and hunter counts collected by the Minnesota Department of Natural Resources during the spring turkey hunting season.

17.
The balanced half-sample and jackknife variance estimation techniques are used to estimate the variance of the combined ratio estimate. An empirical sampling study is conducted using computer-generated populations to investigate the variance, bias and mean square error of these variance estimators and results are compared to theoretical results derived elsewhere for the linear case. Results indicate that either the balanced half-sample or jackknife method may be used effectively for estimating the variance of the combined ratio estimate.
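The delete-one jackknife applied to a ratio estimate can be sketched as follows (a generic single-sample illustration; the paper's combined ratio estimate additionally pools strata, which we omit here):

```python
import numpy as np

def jackknife_ratio_variance(y, x):
    """Delete-one jackknife variance estimate for the ratio estimator
    R_hat = sum(y) / sum(x). Returns (variance estimate, R_hat)."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(y)
    r_hat = y.sum() / x.sum()
    # Recompute the ratio with each observation deleted in turn.
    r_jack = np.array([(y.sum() - y[i]) / (x.sum() - x[i]) for i in range(n)])
    var_est = (n - 1) / n * np.sum((r_jack - r_jack.mean()) ** 2)
    return var_est, r_hat
```

When x is constant the ratio reduces to the sample mean and the jackknife variance reduces to the usual s²/n, a handy sanity check.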

18.
The estimation of the variance for the GREG (general regression) estimator by weighted residuals is widely accepted as a method which yields estimators with good conditional properties. Since the optimal (regression) estimator shares the properties of GREG estimators which are used in the construction of weighted variance estimators, we introduce the weighting procedure also for estimating the variance of the optimal estimator. This method of variance estimation was originally presented in a seemingly ad hoc manner, and we shall discuss it from a conditional point of view and also look at an alternative way of utilizing the weights. Examples that stress conditional behaviour of estimators are then given for elementary sampling designs such as simple random sampling, stratified simple random sampling and Poisson sampling, where for the latter design we have conducted a small simulation study.

19.
Observational studies are increasingly being used in medicine to estimate the effects of treatments or exposures on outcomes. To minimize the potential for confounding when estimating treatment effects, propensity score methods are frequently implemented. Often outcomes are the time to event. While it is common to report the treatment effect as a relative effect, such as the hazard ratio, reporting the effect using an absolute measure of effect is also important. One commonly used absolute measure of effect is the risk difference or difference in probability of the occurrence of an event within a specified duration of follow-up between a treatment and comparison group. We first describe methods for point and variance estimation of the risk difference when using weighting or matching based on the propensity score when outcomes are time-to-event. Next, we conducted Monte Carlo simulations to compare the relative performance of these methods with respect to bias of the point estimate, accuracy of variance estimates, and coverage of estimated confidence intervals. The results of the simulation generally support the use of weighting methods (untrimmed ATT weights and IPTW) or caliper matching when the prevalence of treatment is low for point estimation. For standard error estimation the simulation results support the use of weighted robust standard errors, bootstrap methods, or matching with a naïve standard error (i.e., Greenwood method). The methods considered in the article are illustrated using a real-world example in which we estimate the effect of discharge prescribing of statins on patients hospitalized for acute myocardial infarction.
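A stripped-down version of IPTW point estimation of the risk difference can be sketched as follows (a simplification that ignores censoring, using a binary "event by the follow-up horizon" indicator; the paper's time-to-event versions replace the weighted means with weighted survival-curve estimates):

```python
import numpy as np

def iptw_risk_difference(treated, event, propensity):
    """Inverse-probability-of-treatment-weighted risk difference:
    weighted event risk under treatment minus weighted risk under control.
    `propensity` holds the estimated P(treated | covariates) per subject."""
    treated = np.asarray(treated, dtype=float)
    event = np.asarray(event, dtype=float)
    propensity = np.asarray(propensity, dtype=float)
    w1 = treated / propensity            # weights for the treated group
    w0 = (1.0 - treated) / (1.0 - propensity)  # weights for the control group
    risk_treated = np.sum(w1 * event) / np.sum(w1)
    risk_control = np.sum(w0 * event) / np.sum(w0)
    return risk_treated - risk_control
```

With a constant propensity of 0.5 (a randomized trial) the weights are equal and the estimate reduces to the plain difference in group event proportions.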

20.

Linear mixed effects models have been popular in small area estimation problems for modeling survey data when the sample size in one or more areas is too small for reliable inference. However, when the data are restricted to a bounded interval, the linear model may be inappropriate, particularly if the data are near the boundary. Nonlinear sampling models are becoming increasingly popular for small area estimation problems when the normal model is inadequate. This paper studies the use of a beta distribution as an alternative to the normal distribution as a sampling model for survey estimates of proportions which take values in (0, 1). Inference for small area proportions based on the posterior distribution of a beta regression model ensures that point estimates and credible intervals take values in (0, 1). Properties of a hierarchical Bayesian small area model with a beta sampling distribution and logistic link function are presented and compared to those of the linear mixed effect model. Propriety of the posterior distribution using certain noninformative priors is shown, and behavior of the posterior mean as a function of the sampling variance and the model variance is described. An example using 2010 Small Area Income and Poverty Estimates (SAIPE) data is given, and a numerical example studying small sample properties of the model is presented.
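The building blocks of such a model—the mean/precision parameterisation of the beta sampling distribution and the logistic link—can be sketched in a few lines (an illustration of the standard beta-regression parameterisation, not the paper's full hierarchical model):

```python
import math

def beta_params(mu, phi):
    """Mean/precision parameterisation of the beta sampling model:
    y ~ Beta(mu * phi, (1 - mu) * phi), so that
    E[y] = mu and Var[y] = mu * (1 - mu) / (1 + phi)."""
    return mu * phi, (1.0 - mu) * phi

def logit_link_mean(eta):
    """Logistic link: maps a linear predictor eta to a mean in (0, 1),
    guaranteeing estimates stay inside the unit interval."""
    return 1.0 / (1.0 + math.exp(-eta))
```

The logistic link is what guarantees the property highlighted in the abstract: fitted small area proportions and credible intervals cannot escape (0, 1).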


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号