Similar Documents
20 similar documents retrieved.
1.
Nonparametric density estimation in the presence of measurement error is considered. The usual kernel deconvolution estimator seeks to account for the contamination in the data by employing a modified kernel. In this paper a new approach based on a weighted kernel density estimator is proposed. Theoretical motivation is provided by the existence of a weight vector that perfectly counteracts the bias in density estimation without generating an excessive increase in variance. In practice a data-driven method of weight selection is required. Our strategy is to minimize the discrepancy between a standard kernel estimate from the contaminated data on the one hand, and the convolution of the weighted deconvolution estimate with the measurement error density on the other hand. We consider a direct implementation of this approach, in which the weights are optimized subject to sum and non-negativity constraints, and a regularized version in which the objective function includes a ridge-type penalty. Numerical tests suggest that weighted kernel estimation can lead to tangible improvements in performance over the usual kernel deconvolution estimator. Furthermore, weighted kernel estimates are free from the problem of negative estimation in the tails that can occur when using modified kernels. The weighted kernel approach generalizes to the case of multivariate deconvolution density estimation in a very straightforward manner.
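The weight-selection strategy described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian kernel and Gaussian measurement error (so convolving the weighted estimate with the error density just inflates the kernel scale), and picks the weights by constrained least squares; all data settings are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, h, sig_u = 100, 0.35, 0.4
x_latent = rng.normal(0.0, 1.0, n)            # unobserved X
w_obs = x_latent + rng.normal(0.0, sig_u, n)  # contaminated W = X + U

grid = np.linspace(-4.0, 4.0, 81)

# Standard kernel estimate from the contaminated data (the target curve).
g_hat = norm.pdf(grid[:, None], loc=w_obs, scale=h).mean(axis=1)

# Convolving a Gaussian-kernel weighted estimate with N(0, sig_u^2) error
# simply inflates the kernel scale to sqrt(h^2 + sig_u^2).
A = norm.pdf(grid[:, None], loc=w_obs, scale=np.sqrt(h**2 + sig_u**2))

def discrepancy(w):
    # squared distance between the convolved weighted estimate and g_hat
    return np.sum((A @ w - g_hat) ** 2)

res = minimize(discrepancy, np.full(n, 1.0 / n),
               bounds=[(0.0, None)] * n,
               constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
               method="SLSQP")
weights = res.x

# Weighted deconvolution estimate on the grid; non-negative by construction.
f_w = norm.pdf(grid[:, None], loc=w_obs, scale=h) @ weights
```

The non-negativity of `f_w` everywhere, including the tails, is exactly the property the abstract contrasts with modified-kernel deconvolution.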

2.
This article compares the properties of two balanced randomization schemes with several treatments under non-uniform allocation probabilities. According to the first procedure, the so-called truncated multinomial randomization design, the process employs a given allocation distribution until a treatment receives its quota of subjects, after which this distribution switches to the conditional distribution for the remaining treatments, and so on. The second scheme, the random allocation rule, selects at random any legitimate assignment of the given number of subjects per treatment. The behavior of these two schemes is shown to be quite different: the truncated multinomial randomization design's assignment probabilities to a treatment turn out to vary over the recruitment period, and its accidental bias can be large, whereas the corresponding bias of the random allocation rule is bounded. The limiting distributions of the instants at which a treatment receives its given number of subjects are shown to be those of weighted spacings for normal order statistics with different variances. Formulas for the selection bias of both procedures are also derived.
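The truncated multinomial scheme as described (draw from the allocation distribution, close an arm once its quota is met, renormalize over the open arms) can be sketched as follows; this is an illustrative simulation under assumed quotas and probabilities, not code from the article.

```python
import numpy as np

def truncated_multinomial_assign(quotas, probs, rng):
    """Assign sum(quotas) subjects sequentially: draw from the current
    allocation distribution, and once an arm reaches its quota, drop it
    and renormalize over the remaining open arms."""
    quotas = np.asarray(quotas, dtype=int)
    counts = np.zeros_like(quotas)
    p = np.asarray(probs, dtype=float)
    assignment = []
    for _ in range(quotas.sum()):
        open_arms = counts < quotas
        q = np.where(open_arms, p, 0.0)   # conditional distribution
        q /= q.sum()
        arm = rng.choice(len(quotas), p=q)
        counts[arm] += 1
        assignment.append(arm)
    return np.array(assignment)

rng = np.random.default_rng(1)
asgn = truncated_multinomial_assign([10, 20, 30], [1/6, 2/6, 3/6], rng)
```

Tracking, over many replications, how often each arm is assigned at recruitment position k reproduces the time-varying assignment probabilities the article analyzes.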

3.
Demonstrated equivalence between a categorical regression model based on case‐control data and an I‐sample semiparametric selection bias model leads to a new goodness‐of‐fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov‐type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I‐sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.

4.
We consider the case of a multicenter trial in which the center-specific sample sizes are potentially small. Under homogeneity, the conventional procedure is to pool information using a weighted estimator where the weights used are inverse estimated center-specific variances. Whereas this procedure is efficient under conventional asymptotics (e.g. center-specific sample sizes become large, number of centers fixed), it is commonly believed that the efficiency of this estimator holds true also under meta-analytic asymptotics (e.g. center-specific sample sizes bounded, potentially small, and number of centers large). In this contribution we demonstrate that this estimator fails to be efficient. In fact, it shows a persistent bias with increasing number of centers, showing that it is not meta-consistent. In addition, we show that the Cochran and Mantel-Haenszel weighted estimators are meta-consistent and, in more generality, provide conditions on the weights such that the associated weighted estimator is meta-consistent.

5.
Summary. The need to evaluate the performance of active labour market policies is not questioned any longer. Even though OECD countries spend significant shares of national resources on these measures, unemployment rates remain high or even increase. We focus on microeconometric evaluation, which has to solve the fundamental evaluation problem and overcome the possible occurrence of selection bias. When using non-experimental data, different evaluation approaches can be thought of. The aim of this paper is to review the most relevant estimators, discuss their identifying assumptions and their (dis-)advantages. Thereby we will present estimators based on some form of exogeneity (selection on observables) as well as estimators where selection might also occur on unobservable characteristics. Since the possible occurrence of effect heterogeneity has become a major topic in evaluation research in recent years, we will also assess the ability of each estimator to deal with it. Additionally, we will discuss some recent extensions of the static evaluation framework to allow for dynamic treatment evaluation. The authors thank Stephan L. Thomsen, Christopher Zeiss and one anonymous referee for valuable comments. The usual disclaimer applies.

6.
The weighted least squares (WLS) estimator is often employed in linear regression using complex survey data to deal with the bias in ordinary least squares (OLS) arising from informative sampling. In this paper a 'quasi-Aitken WLS' (QWLS) estimator is proposed. QWLS modifies WLS in the same way that Cragg's quasi-Aitken estimator modifies OLS. It weights by the usual inverse sample inclusion probability weights multiplied by a parameterized function of covariates, where the parameters are chosen to minimize a variance criterion. The resulting estimator is consistent for the superpopulation regression coefficient under fairly mild conditions and has a smaller asymptotic variance than WLS.
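As a rough sketch of the QWLS idea, the weights below are the inverse inclusion probabilities multiplied by a one-parameter multiplier exp(θx), with θ chosen to minimize a sandwich-type variance criterion. The data, inclusion probabilities, multiplier, and criterion are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n = 500
x = rng.uniform(0.0, 2.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 + 0.5 * x)  # heteroscedastic errors
pi = np.clip(0.2 + 0.3 * x, 0.05, 1.0)              # inclusion probabilities (assumed known)
X = np.column_stack([np.ones(n), x])

def wls_beta(theta):
    # survey weight 1/pi times the parameterized multiplier q(x; theta)
    w = (1.0 / pi) * np.exp(theta * x)
    return w, np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))

def sandwich_trace(theta):
    # variance criterion: trace of a sandwich estimate of var(beta_hat)
    w, beta = wls_beta(theta)
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ (w[:, None] * X))
    meat = X.T @ (((w * resid) ** 2)[:, None] * X)
    return np.trace(bread @ meat @ bread)

res = minimize_scalar(sandwich_trace, bounds=(-2.0, 2.0), method="bounded")
_, beta_wls = wls_beta(0.0)     # ordinary survey-weighted WLS (theta = 0)
_, beta_qwls = wls_beta(res.x)  # quasi-Aitken WLS
```

Because θ = 0 recovers plain WLS, the optimized criterion can never be worse than the WLS one, mirroring the asymptotic-variance claim in the abstract.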

7.
The authors study the estimation of domain totals and means under survey‐weighted regression imputation for missing items. They use two different approaches to inference: (i) design‐based with uniform response within classes; (ii) model‐assisted with ignorable response and an imputation model. They show that the imputed domain estimators are biased under (i) but approximately unbiased under (ii). They obtain a bias‐adjusted estimator that is approximately unbiased under (i) or (ii). They also derive linearization variance estimators. They report the results of a simulation study on the bias ratio and efficiency of alternative estimators, including a complete case estimator that requires the knowledge of response indicators.

8.
The randomization design used to collect the data provides the basis for the exact distributions of the permutation tests. The truncated binomial design is one of the designs commonly used for forcing balance in clinical trials to eliminate experimental bias. In this article, we consider the exact distribution of the weighted log-rank class of tests for censored data under the truncated binomial design. A double saddlepoint approximation for p-values of this class is derived under the truncated binomial design. The speed and accuracy of the saddlepoint approximation over the normal asymptotic approximation facilitate the inversion of the weighted log-rank tests to determine nominal 95% confidence intervals for the treatment effect with right-censored data.

9.
Doubly adaptive biased coin design (DBCD) is an important family of response-adaptive randomization procedures for clinical trials. It uses sequentially updated estimation to skew the allocation probability to favor the treatment that has performed better thus far. An important assumption for the DBCD is the homogeneity assumption for the patient responses. However, this assumption may be violated in many sequential experiments. Here we prove the robustness of the DBCD against certain time trends in patient responses. Strong consistency and asymptotic normality of the design are obtained under some widely satisfied conditions. Also, we propose a general weighted likelihood method to reduce the bias caused by the heterogeneity in the inference after a trial. Some numerical studies are also presented to illustrate the finite sample properties of DBCD.
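For concreteness, a commonly used DBCD allocation function (the Hu-Zhang form for two arms) skews the next assignment probability toward the estimated target proportion rho whenever the observed allocation x drifts away from it; this is a standard form from the DBCD literature, not necessarily the exact one studied in this paper.

```python
import numpy as np

def dbcd_alloc_prob(x, rho, gamma=2.0):
    """Hu-Zhang DBCD allocation function for two arms.
    x: current observed proportion of subjects on arm 1;
    rho: current estimated target proportion for arm 1;
    gamma >= 0 tunes the trade-off between randomness and targeting."""
    a = rho * (rho / x) ** gamma
    b = (1.0 - rho) * ((1.0 - rho) / (1.0 - x)) ** gamma
    return a / (a + b)

# An under-allocated arm (x below the target rho) gets a boosted probability.
p_next = dbcd_alloc_prob(x=0.4, rho=0.6)
```

When x equals rho the function returns rho itself, so the procedure randomizes at the target rate once the allocation is on track.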

10.
Summary. A controversial topic in obstetrics is the effect of walking on the probability of Caesarean section among women in labour. A major reason for the controversy is the presence of non-compliance that complicates the estimation of efficacy, the effect of treatment received on outcome. The intent-to-treat method does not estimate efficacy, and estimates of efficacy that are based directly on treatment received may be biased because they are not protected by randomization. However, when non-compliance occurs immediately after randomization, the use of a potential outcomes model with reasonable assumptions has made it possible to estimate efficacy and still to retain the benefits of randomization to avoid selection bias. In this obstetrics application, non-compliance occurs initially and later in one arm. Consequently some parameters cannot be uniquely estimated without making strong assumptions. This difficulty is circumvented by a new study design involving an additional randomization group and a novel potential outcomes model (principal stratification).

11.
Inverse probability weighting (IPW) can deal with confounding in non-randomized studies. The inverse weights are probabilities of treatment assignment (propensity scores), estimated by regressing assignment on predictors. Problems arise if predictors can be missing. Solutions previously proposed include assuming assignment depends only on observed predictors, and multiple imputation (MI) of missing predictors. For the MI approach, it was recommended that missingness indicators be used together with the other predictors. We determine when the two MI approaches (with and without missingness indicators) yield consistent estimators and compare their efficiencies. We find that, although including indicators can reduce bias when predictors are missing not at random, it can induce bias when they are missing at random. We propose a consistent variance estimator and investigate the performance of the simpler Rubin's Rules variance estimator. In simulations we find both estimators perform well. IPW is also used to correct bias when an analysis model is fitted to incomplete data by restricting to complete cases. Here, weights are inverse probabilities of being a complete case. We explain how the same MI methods can be used in this situation to deal with missing predictors in the weight model, and illustrate this approach using data from the National Child Development Survey.
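The IPW primitive that the abstract builds on can be sketched as follows: fully observed predictors, a logistic propensity model fitted by Newton-Raphson, and Horvitz-Thompson-style weighting. The data-generating setup is a hypothetical illustration, and the MI machinery for missing predictors is not shown.

```python
import numpy as np

def fit_logistic(X, t, iters=25):
    """Newton-Raphson for a logistic propensity model P(T=1|X)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        H = X.T @ ((p * (1.0 - p))[:, None] * X)   # observed information
        beta = beta + np.linalg.solve(H, X.T @ (t - p))
    return beta

rng = np.random.default_rng(3)
n = 4000
x = rng.normal(size=n)                                # fully observed confounder
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * x)))   # treatment assignment
y = 1.0 * t + 2.0 * x + rng.normal(size=n)            # true treatment effect = 1

X = np.column_stack([np.ones(n), x])
ps = 1.0 / (1.0 + np.exp(-X @ fit_logistic(X, t)))    # estimated propensity scores

# IPW (Horvitz-Thompson style) estimate of the average treatment effect.
ate_ipw = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
```

With predictors partly missing, the question the paper studies is what to feed into `fit_logistic`: imputed values alone, or imputed values together with missingness indicators.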

12.
It is shown how the usual two-step estimator for the standard sample selection model can be seen as a method of moments (MM) estimator. Standard GMM theory can be brought to bear on this model, greatly simplifying the derivation of its asymptotic properties. Using this setup, the asymptotic variance is derived in detail and a consistent estimator of it is obtained that is guaranteed to be positive definite, in contrast with the estimator given in the literature. It is demonstrated how the MM approach easily accommodates variations on the estimator, like the two-step IV estimator that handles endogenous regressors, and a two-step GLS estimator. Furthermore, it is shown that from the MM formulation it is straightforward to derive various specification tests, in particular tests for selection bias, equivalence with the censored regression model, normality, homoskedasticity, and exogeneity.

13.
This paper addresses the problem of probability density estimation in the presence of covariates when data are missing at random (MAR). The inverse probability weighted method is used to define nonparametric and semiparametric weighted probability density estimators. A regression calibration technique is also used to define an imputed estimator. It is shown that all the estimators are asymptotically normal with the same asymptotic variance as that of the inverse probability weighted estimator with known selection probability function and weights. We also establish mean squared error (MSE) bounds and obtain the MSE convergence rates. A simulation is carried out to assess the proposed estimators in terms of bias and standard error.

14.
In drug development, treatments are most often selected at Phase 2 for further development when an initial trial of a new treatment produces a result that is considered positive. This selection due to a positive result means, however, that an estimator of the treatment effect that does not take account of the selection is likely to over‐estimate the true treatment effect (i.e., will be biased). This bias can be large, and researchers may face a disappointingly lower estimated treatment effect in further trials. In this paper, we review a number of methods that have been proposed to correct for this bias and introduce three new methods. We present results from applying the various methods to two examples and consider extensions of the examples. We assess the methods with respect to bias of estimation of the treatment effect and compare the probabilities that a bias‐corrected treatment effect estimate will exceed a decision threshold. Following previous work, we also compare average power for the situation where a Phase 3 trial is launched given that the bias‐corrected observed Phase 2 treatment effect exceeds a launch threshold. Finally, we discuss our findings and potential application of the bias correction methods.
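The selection effect itself is easy to reproduce in a toy simulation: among trials whose estimated effect clears a launch threshold, the average estimate overshoots the true effect. All numbers below (true effect, standard error, threshold) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
true_effect, se, n_trials = 0.2, 0.15, 100_000

est = rng.normal(true_effect, se, n_trials)  # Phase 2 effect estimates
launch = est > 0.25                          # "positive result" selection rule

# Among launched programmes, the naive estimate overshoots the true effect.
selection_bias = est[launch].mean() - true_effect
```

The bias-correction methods the paper reviews aim to remove exactly this conditional over-estimation before a Phase 3 go/no-go decision is made.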

15.
Inference for a scalar interest parameter in the presence of nuisance parameters is considered in terms of the conditional maximum-likelihood estimator developed by Cox and Reid (1987). Parameter orthogonality is assumed throughout. The estimator is analyzed by means of stochastic asymptotic expansions in three cases: a scalar nuisance parameter, m nuisance parameters from m independent samples, and a vector nuisance parameter. In each case, the expansion for the conditional maximum-likelihood estimator is compared with that for the usual maximum-likelihood estimator. The means and variances are also compared. In each of the cases, the bias of the conditional maximum-likelihood estimator is unaffected by the nuisance parameter to first order. This is not so for the maximum-likelihood estimator. The assumption of parameter orthogonality is crucial in attaining this result. Regardless of parametrization, the difference in the two estimators is first-order and is deterministic to this order.

16.
ABSTRACT

This paper deals with the problem of estimating the finite population mean in stratified random sampling by using two auxiliary variables. We propose a ratio-cum-product exponential type estimator of the population mean under different situations: (i) when there is non-response and measurement error on the study as well as auxiliary variables; (ii) when there is non-response on the study and auxiliary variables but no measurement error; (iii) when there is complete response on the study variable but non-response and measurement error on the auxiliary variables; and (iv) when there is complete response but measurement error on the study as well as auxiliary variables. The expressions for the bias and mean square error of the proposed estimator have been obtained up to the first degree of approximation. The proposed estimator has been compared with the usual unbiased estimator, the ratio estimator and other existing estimators, and conditions are obtained under which the proposed estimator is more efficient than the estimators considered. A simulation study is carried out to support the theoretical findings.
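As a hedged illustration of the general form only (the paper's exact estimator and its non-response and measurement-error adjustments are not reproduced here), a Bahl-Tuteja-style ratio-cum-product exponential estimator combines an exponential ratio factor in one auxiliary variable with an exponential product factor in the other:

```python
import numpy as np

def ratio_cum_product_exp(y_s, x_s, z_s, X_bar, Z_bar):
    """Illustrative ratio-cum-product exponential estimator:
    y_bar * exp((X_bar - x_bar)/(X_bar + x_bar))        (ratio part, x)
          * exp((z_bar - Z_bar)/(z_bar + Z_bar))        (product part, z)
    X_bar, Z_bar are the known population means of the auxiliaries."""
    y_bar, x_bar, z_bar = y_s.mean(), x_s.mean(), z_s.mean()
    return (y_bar * np.exp((X_bar - x_bar) / (X_bar + x_bar))
                  * np.exp((z_bar - Z_bar) / (z_bar + Z_bar)))

# When the sample means hit the population means, both exponential
# factors equal 1 and the estimator reduces to the sample mean of y.
est = ratio_cum_product_exp(np.array([2.0, 4.0]), np.array([1.0, 3.0]),
                            np.array([5.0, 7.0]), X_bar=2.0, Z_bar=6.0)
```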

17.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and phase II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy. Patient toxicity outcomes are used for determining admissibility to each dose combination and are not used for selection of the dose combination itself. When the objective is to find the optimum dose combination with respect to both toxicity and efficacy, rather than efficacy alone, patients need to be allocated to dose combinations in a way that balances the trade‐off between toxicity and efficacy. We propose a Bayesian hierarchical model and an adaptive randomization procedure that take both toxicity and efficacy into account. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity probability and efficacy probability for each of the dose combinations. We then use Bayesian moving‐reference adaptive randomization on the basis of a desirability measure computed from the obtained estimates. Computer simulations suggest that the proposed method will likely recommend a higher percentage of target dose combinations than a previously proposed method.

18.
The performances of data-driven bandwidth selection procedures in local polynomial regression are investigated by using asymptotic methods and simulation. The bandwidth selection procedures considered are based on minimizing 'prelimit' approximations to the (conditional) mean-squared error (MSE) when the MSE is considered as a function of the bandwidth h. We first consider approximations to the MSE that are based on Taylor expansions around h=0 of the bias part of the MSE. These approximations lead to estimators of the MSE that are accurate only for small bandwidths h. We also consider a bias estimator which, instead of using small-h approximations to bias, naïvely estimates bias as the difference of two local polynomial estimators of different order, and we show that this estimator performs well only for moderate to large h. We next define a hybrid bias estimator which equals the Taylor-expansion-based estimator for small h and the difference estimator for moderate to large h. We find that the MSE estimator based on this hybrid bias estimator leads to a bandwidth selection procedure with good asymptotic and, for our Monte Carlo examples, finite sample properties.
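The "difference of two local polynomial estimators" bias estimate can be sketched directly: fit a low-order and a higher-order local polynomial at the same point and take the difference of the fitted values. The kernel, orders, bandwidth, and test function below are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def local_poly_fit(x0, x, y, h, degree):
    """Gaussian-kernel local polynomial fit; returns the fitted value at x0
    (the intercept of the local expansion in powers of x - x0)."""
    u = (x - x0) / h
    w = np.exp(-0.5 * u * u)
    Xd = np.vander(x - x0, degree + 1, increasing=True)
    beta = np.linalg.solve(Xd.T @ (w[:, None] * Xd), Xd.T @ (w * y))
    return beta[0]

rng = np.random.default_rng(5)
x = rng.uniform(-1.0, 1.0, 400)
y = np.sin(3.0 * x) + rng.normal(0.0, 0.1, 400)

h, x0 = 0.15, 0.5
m_lin = local_poly_fit(x0, x, y, h, degree=1)  # local linear estimate
m_cub = local_poly_fit(x0, x, y, h, degree=3)  # higher-order estimate
bias_hat = m_lin - m_cub                       # difference-type bias estimate
```

As the abstract notes, this difference estimator is informative for moderate to large h, where the low-order fit carries appreciable smoothing bias; for very small h it mostly reflects noise.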

19.
ABSTRACT

This article considers linear social interaction models under incomplete information that allow for missing outcome data due to sample selection. For model estimation, assuming that each individual forms his/her belief about the other members’ outcomes based on rational expectations, we propose a two-step series nonlinear least squares estimator. Both the consistency and asymptotic normality of the estimator are established. As an empirical illustration, we apply the proposed model and method to National Longitudinal Study of Adolescent Health (Add Health) data to examine the impacts of friendship interactions on adolescents’ academic achievements. We provide empirical evidence that the interaction effects are important determinants of grade point average and that controlling for sample selection bias has certain impacts on the estimation results. Supplementary materials for this article are available online.

20.
This paper is concerned with the Huntsberger-type weighted shrinkage estimator of a parameter when a target value of the same is available. Expressions for the bias and the mean squared error of the estimator are derived. Some results concerning the bias, the existence of a uniformly minimum mean squared error estimator, etc., are proved. For certain choices of the weight function, numerical results are presented for the pretest-type weighted shrinkage estimator of the mean of normal as well as exponential distributions.
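A minimal sketch of a pretest-type weighted shrinkage estimator for a normal mean follows, using a fixed weight w and a simple two-sided pretest; the weight-function formulation studied in the paper is more general, and this specific form is an illustrative assumption.

```python
import numpy as np

def pretest_shrinkage_mean(x, theta0, w=0.5):
    """Pretest weighted shrinkage toward a target value theta0:
    if the pretest does not reject theta0 (|z| small), return the
    convex combination w*xbar + (1-w)*theta0; otherwise return the
    plain sample mean. Illustrative special case, not the paper's
    general weight function."""
    xbar, s, n = x.mean(), x.std(ddof=1), len(x)
    z = (xbar - theta0) / (s / np.sqrt(n))
    z_crit = 1.959963984540054  # two-sided 5% normal critical value
    if abs(z) <= z_crit:
        return w * xbar + (1.0 - w) * theta0
    return xbar

shrunk = pretest_shrinkage_mean(np.array([0.1, -0.1, 0.2, -0.2, 0.05]), 0.0)
plain = pretest_shrinkage_mean(100.0 + np.arange(10.0), 0.0)
```

When the target is plausible the estimate is pulled toward it, trading a little bias for reduced mean squared error; when the data contradict the target, the estimator falls back to the unshrunk sample mean.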
