Similar articles
20 similar articles found (search time: 406 ms)
1.
Two-phase stratified sampling is used to select subjects for the collection of additional data, e.g., validation data in measurement error problems. Stratification jointly by outcome and covariates, with sampling fractions chosen to achieve approximately equal numbers per stratum at the second phase of sampling, enhances efficiency compared with stratification based on the outcome or covariates alone. Nonparametric maximum likelihood may yield substantially more efficient estimates of logistic regression coefficients than weighted or pseudolikelihood procedures. Software to implement all three procedures is available. We demonstrate the practical importance of these design and analysis principles through an analysis of, and simulations based on, data from the US National Wilms Tumor Study.
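A design of this kind is easy to prototype. The sketch below is an illustration only, with an invented data-generating model (it is not the study's software): it stratifies jointly on the outcome and a dichotomized surrogate covariate, draws roughly equal numbers per stratum at the second phase, and fits a weighted (pseudolikelihood) logistic regression by Newton-Raphson.

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase 1: a large cohort with outcome y and a cheap surrogate covariate z.
n = 20000
z = rng.normal(size=n)
x = z + rng.normal(scale=0.5, size=n)        # "expensive" covariate, measured at phase 2
p = 1 / (1 + np.exp(-(-2.0 + 1.0 * x)))
y = rng.binomial(1, p)

# Phase 2: stratify jointly on outcome and dichotomized surrogate,
# drawing approximately equal numbers per stratum.
strata = 2 * y + (z > 0)                      # 4 strata
m_per_stratum = 250
idx, w = [], []
for s in range(4):
    members = np.flatnonzero(strata == s)
    take = rng.choice(members, size=min(m_per_stratum, members.size),
                      replace=False)
    idx.append(take)
    w.append(np.full(take.size, members.size / take.size))  # inverse sampling fraction
idx, w = np.concatenate(idx), np.concatenate(w)

# Weighted (pseudolikelihood) logistic fit by Newton-Raphson on the phase-2 sample.
X = np.column_stack([np.ones(idx.size), x[idx]])
beta = np.zeros(2)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    grad = X.T @ (w * (y[idx] - mu))
    H = (X * (w * mu * (1 - mu))[:, None]).T @ X
    beta += np.linalg.solve(H, grad)
print(beta)  # should land near the true values (-2.0, 1.0)
```

The inverse-sampling-fraction weights make the estimating equations unbiased for the population parameters; the nonparametric maximum likelihood estimator the abstract refers to is more efficient but much harder to sketch in a few lines.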

2.
Case-base sampling provides an alternative to risk-set-sampling-based methods for estimating hazard regression models, particularly when absolute hazards are of interest in addition to hazard ratios. Case-base sampling results in a likelihood expression of the logistic regression form, but instead of relying on categorized time, this expression is obtained by sampling a discrete set of person-time coordinates from the full follow-up data. In this paper, in the context of a time-dependent exposure such as vaccination and a potentially recurrent adverse event outcome, we show that the resulting partial likelihood for the outcome event intensity has the asymptotic properties of a likelihood. We contrast this approach with self-matched case-base sampling, which involves only within-individual comparisons. The efficiency of the case-base methods is compared with that of standard methods through simulations, which suggest that the information loss due to sampling is minimal.

3.
Many sampling problems involving multiple populations can be considered under the semiparametric framework of the biased, or weighted, sampling model. Included under this framework is logistic regression under case–control sampling. For any model, atypical observations can greatly influence the maximum likelihood estimate of the parameters. Several robust alternatives have been proposed for the special case of logistic regression, but some current techniques exhibit poor behavior in many common situations. In this paper a new family of procedures is constructed to estimate the parameters in the semiparametric biased sampling model. The procedures take a minimum distance approach, but one based on characteristic functions. The estimators can also be represented as the minimizers of quadratic forms in simple residuals, yielding straightforward computation. For the case of logistic regression, the resulting estimators are shown to be competitive with existing robust approaches in terms of both robustness and efficiency, while maintaining affine equivariance. The approach is developed under the case–control sampling scheme, yet is shown to be applicable under prospective-sampling logistic regression as well.

4.
A local likelihood method with constraints is developed for case-control samples to estimate the unknown relative risk function and odds ratio. Our estimates reduce to solving two systems of estimating equations. One system, for estimating the relative risk function, is identical to that based on locally weighted logistic regression analysis under prospective sampling. The other, for estimating the odds ratio, is identical to that used in traditional linear logistic regression analysis. Asymptotic properties of the estimators are presented. Two real examples and simulations illustrate our method. The results confirm that our approach is useful for estimating the relative risk function and odds ratio in case-control studies.

5.

We consider nonparametric logistic regression and propose a generalized likelihood test for detecting a threshold effect: a relationship between a risk factor and a defined outcome above the threshold but none below it. One important field of application is occupational medicine and, in particular, epidemiological studies. In such studies, segmented fully parametric logistic regression models are often used as threshold models, where the exposure is assumed to have no influence on the response up to a possibly unknown threshold and an effect beyond it. Finding efficient methods for the detection and estimation of a threshold is therefore an important task. This article proposes such methods in the context of nonparametric logistic regression. We use a local version of the unknown likelihood functions and show that under rather common assumptions the asymptotic power of our test is one. We present a guaranteed non-asymptotic upper bound for the significance level of the proposed test. If the test concludes that a change point (and hence a threshold limit value) exists, we suggest using the local maximum likelihood estimator of the change point and consider the asymptotic properties of this estimator.

6.
This article develops three empirical likelihood (EL) approaches to estimate parameters in nonlinear regression models in the presence of nonignorable missing responses. These are based on the inverse probability weighted (IPW) method, the augmented IPW (AIPW) method and the imputation technique. A logistic regression model is adopted to specify the propensity score. Maximum likelihood estimation is used to estimate parameters in the propensity score by combining the idea of importance sampling and imputing estimating equations. Under some regularity conditions, we obtain the asymptotic properties of the maximum EL estimators of these unknown parameters. Simulation studies are conducted to investigate the finite sample performance of our proposed estimation procedures. Empirical results provide evidence that the AIPW procedure exhibits better performance than the other two procedures. Data from a survey conducted in 2002 are used to illustrate the proposed estimation procedure. The Canadian Journal of Statistics 48: 386–416; 2020 © 2020 Statistical Society of Canada

7.
Semiparametric transformation models provide flexible regression models for survival analysis, including the Cox proportional hazards and the proportional odds models as special cases. We consider the application of semiparametric transformation models in case-cohort studies, where the covariate data are observed only on cases and on a subcohort randomly sampled from the full cohort. We first propose an approximate profile likelihood approach with full-cohort data, which amounts to the pseudo-partial likelihood approach of Zucker [2005. A pseudo-partial likelihood method for semiparametric survival regression with covariate errors. J. Amer. Statist. Assoc. 100, 1264–1277]. Simulation results show that our proposal is almost as efficient as the nonparametric maximum likelihood estimator. We then extend this approach to the case-cohort design, applying the Horvitz–Thompson weighting method to the estimating equations from the approximated profile likelihood. Two levels of weights can be utilized to achieve unbiasedness and to gain efficiency. The resulting estimator has a closed-form asymptotic covariance matrix, and is found in simulations to be substantially more efficient than the estimator based on martingale estimating equations. The extension to left-truncated data will be discussed. We illustrate the proposed method on data from a cardiovascular risk factor study conducted in Taiwan.

8.
In outcome-dependent sampling, the continuous or binary outcome variable in a regression model is available in advance to guide selection of a sample on which explanatory variables are then measured. Selection probabilities may either be a smooth function of the outcome variable or be based on a stratification of the outcome. In many cases, only data from the final sample is accessible to the analyst. A maximum likelihood approach for this data configuration is developed here for the first time. The likelihood for fully general outcome-dependent designs is stated, then the special case of Poisson sampling is examined in more detail. The maximum likelihood estimator differs from the well-known maximum sample likelihood estimator, and an information bound result shows that the former is asymptotically more efficient. A simulation study suggests that the efficiency difference is generally small. Maximum sample likelihood estimation is therefore recommended in practice when only sample data is available. Some new smooth sample designs show considerable promise.

9.
Logistic regression is the most popular technique for modeling dichotomous dependent variables, with intensive application in the social, medical, behavioral, and public health sciences. In this paper we propose a more efficient logistic regression analysis based on a moving extreme ranked set sampling (MERSSmin) scheme, with ranking based on an easily available auxiliary variable known to be associated with the response variable. The paper demonstrates that this approach provides a more powerful testing procedure, as well as more efficient odds ratio and parameter estimation, than simple random sampling (SRS). Theoretical derivations and simulation studies are provided. Data from the 2011 Youth Risk Behavior Surveillance System (YRBSS) are used to illustrate the procedures developed in this paper.
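The selection step of a moving-extreme ranked set scheme can be sketched as follows. This is an illustration only: `merss_min` is an invented name, the data are synthetic, and only the rank-by-auxiliary-variable selection is shown, not the MERSSmin-based logistic analysis itself. One common variant keeps the minimum-ranked unit from each successive set.

```python
import numpy as np

rng = np.random.default_rng(2)

def merss_min(aux, m):
    """From each successive set of m units, keep the one whose (cheap)
    auxiliary value is the minimum; units are only *ranked* on aux, and
    the expensive response is then measured on the selected units only."""
    sets = aux[: (aux.size // m) * m].reshape(-1, m)
    return sets.argmin(axis=1) + m * np.arange(sets.shape[0])

# Auxiliary variable assumed correlated with the response of interest.
N, m = 3000, 5
a = rng.normal(size=N)
chosen = merss_min(a, m)
print(chosen.size, a[chosen].mean())  # selected units have systematically low aux values
```

The efficiency gain claimed in the abstract comes from this deliberate spread of the selected units over the auxiliary-variable distribution, compared with a simple random sample of the same size.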

10.
Family-based case–control designs are commonly used in epidemiological studies for evaluating the roles of genetic susceptibility and environmental exposure to risk factors in the etiology of rare diseases. Within this framework, it is often reasonable to assume that genetic susceptibility and environmental exposure are conditionally independent of each other within families in the source population. We focus on this setting to explore the situation of measurement error affecting the assessment of environmental exposure. We correct for measurement error through a likelihood-based method, exploiting a conditional likelihood approach to relate the probability of disease to the genetic and environmental risk factors. We show that this approach provides less biased and more efficient results than one based on logistic regression. Regression calibration, in contrast, provides severely biased estimators of the parameters. The correction methods are compared through simulation under common measurement error structures.

11.
Motivated by an application with complex survey data, we show that for logistic regression with a simple matched-pairs design, infinitely replicating observations and maximizing the conditional likelihood results in an estimator exactly identical to the unconditional maximum likelihood estimator based on the original sample, which is inconsistent. Therefore, applying conditional likelihood methods to a pseudosample with observations replicated a large number of times can lead to an inconsistent estimator; this casts doubt on one possible approach to conditional logistic regression with complex survey data. We speculate that for more general designs, an asymptotic equivalence holds.

12.
Case–control studies allow efficient estimation of the associations of covariates with a binary response in settings where the probability of a positive response is small. It is well known that covariate–response associations can be consistently estimated using a logistic model by acting as if the case–control (retrospective) data were prospective, and that this result does not hold for other binary regression models. However, in practice an investigator may be interested in fitting a non-logistic link binary regression model, and this paper examines the magnitude of the bias resulting from ignoring the case–control sample design with such models. The paper presents an approximation to the magnitude of this bias in terms of the sampling rates of cases and controls, as well as simulation results showing that the bias can be substantial.
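The logistic special case can be checked numerically. The sketch below (an invented simulation, not the paper's) demonstrates the classical result the paper builds on: fitting a logistic model to retrospective case–control data recovers the slope, with only the intercept shifted by the log ratio of the case and control sampling rates. No analogous correction exists for other link functions, which is why they incur the bias the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(3)

# Population generated from a logistic model with a rare-ish outcome.
N = 200000
x = rng.normal(size=N)
p = 1 / (1 + np.exp(-(-4.0 + 1.0 * x)))
y = rng.binomial(1, p)

# Case-control sample: all cases plus an equal number of controls.
cases = np.flatnonzero(y == 1)
controls = rng.choice(np.flatnonzero(y == 0), size=cases.size, replace=False)
idx = np.concatenate([cases, controls])

def logit_fit(X, y, iters=30):
    """Plain Newton-Raphson for logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = 1 / (1 + np.exp(-X @ beta))
        beta += np.linalg.solve((X * (mu * (1 - mu))[:, None]).T @ X,
                                X.T @ (y - mu))
    return beta

X = np.column_stack([np.ones(idx.size), x[idx]])
b = logit_fit(X, y[idx])
# The intercept is shifted by log(sampling rate of cases / sampling rate of controls).
shift = np.log(cases.size / cases.size) - np.log(controls.size / (N - cases.size))
print(b[1], b[0] - shift)  # slope near 1.0; corrected intercept near -4.0
```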

13.
Many two-phase sampling designs have been applied in practice to obtain efficient estimates of regression parameters while minimizing the cost of data collection. This research investigates two-phase sampling designs for so-called expensive variable problems, and compares them with one-phase designs. Closed form expressions for the asymptotic relative efficiency of maximum likelihood estimators from the two designs are derived for parametric normal models, providing insight into the available information for regression coefficients under the two designs. We further discuss when we should apply the two-phase design and how to choose the sample sizes for two-phase samples. Our numerical study indicates that the results can be applied to more general settings.

14.
We propose two retrospective test statistics for testing the vector of odds ratio parameters under the logistic regression model based on case–control data, exploiting the density ratio structure under a two-sample semiparametric model that is equivalent to the assumed logistic regression model. The proposed test statistics are based on the Kullback–Leibler entropy distance and are particularly relevant to the case–control sampling plan. The two statistics have identical asymptotic chi-squared distributions under the null hypothesis and identical asymptotic noncentral chi-squared distributions under local alternatives. Moreover, they require computation of the maximum semiparametric likelihood estimators of the underlying parameters, but are otherwise easily computed. We present simulation results and analyses of two real data sets.

15.
For linear regression models with non-normally distributed errors, the least squares estimate (LSE) loses efficiency relative to the maximum likelihood estimate (MLE). In this article, we propose a kernel density-based regression estimate (KDRE) that is adaptive to the unknown error distribution. The key idea is to approximate the likelihood function using a nonparametric kernel density estimate of the error density, based on an initial parameter estimate. The proposed estimate is shown to be asymptotically as efficient as the oracle MLE that assumes the error density is known. In addition, we propose an EM-type algorithm to maximize the estimated likelihood function and show that the KDRE can be viewed as an iteratively weighted least squares estimate, which provides insight into the adaptiveness of the KDRE to the unknown error distribution. Our Monte Carlo simulation studies show that, while comparable to the traditional LSE for normal errors, the proposed procedure can achieve substantial efficiency gains for non-normal errors, even in small samples.
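The two-stage idea, an initial least squares fit followed by maximization of a kernel-estimated likelihood, can be sketched as follows. This is a simplified illustration with a fixed bandwidth and plain gradient ascent, not the authors' EM algorithm; the data-generating model is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear model with heavy-tailed (Laplace) errors: the LSE loses efficiency.
n = 400
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.laplace(scale=1.0, size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]     # stage 1: initial LSE
h = 1.06 * np.std(y - X @ beta) * n ** (-0.2)   # Silverman-style bandwidth

def kde_score(r):
    """Gaussian-kernel estimates of f(r) and f'(r) at each residual."""
    d = (r[:, None] - r[None, :]) / h
    k = np.exp(-0.5 * d ** 2)
    f = k.sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    fp = (-d / h * k).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    return f, fp

# Stage 2: gradient ascent on the estimated log-likelihood sum(log f_hat(y - X beta)).
for _ in range(50):
    r = y - X @ beta
    f, fp = kde_score(r)
    grad = -X.T @ (fp / f)    # chain rule: d/d(beta) of log f_hat(y - X beta)
    beta += 0.001 * grad
print(beta)  # should stay near the true coefficients (1.0, 2.0)
```

In the Laplace-error setting this score-driven update downweights large residuals relative to least squares, which is the source of the efficiency gain the abstract describes.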

16.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.

17.
18.

19.
Breslow and Holubkov (J. Roy. Statist. Soc. B 59:447–461, 1997a) developed semiparametric maximum likelihood estimation for two-phase studies with a case–control first phase under a logistic regression model, and noted that, apart from the overall intercept term, it coincides with the semiparametric estimator for two-phase studies with a prospective first phase developed in Scott and Wild (Biometrika 84:57–71, 1997). In this paper we extend the Breslow–Holubkov result to general binary regression models and show that it has a very simple relationship with its prospective first-phase counterpart. We also explore why the design of the first phase affects only the intercept of a logistic model, simplify the calculation of standard errors, establish the semiparametric efficiency of the Breslow–Holubkov estimator, and derive its asymptotic distribution in the general case.

20.
The properties of a method for estimating the ratio of parameters in ordered categorical response regression models are discussed. If the link function relating the response variable to the linear combination of covariates is unknown, then only the ratio of regression parameters can be estimated. This ratio has a substitutability, or relative importance, interpretation.

The maximum likelihood estimate of the ratio of parameters, computed assuming a logistic link function (McCullagh, 1980), is found to have very small bias for a wide variety of true link functions. Monte Carlo simulations further show that this maximum likelihood estimate has good coverage properties even if the link function is incorrectly specified. It is demonstrated that combining adjacent categories to make the response binary can result in an appreciably less efficient analysis. The size of the efficiency loss depends on, among other factors, the marginal distribution in the ordered categories.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.). 京ICP备09084417号