Similar Literature
20 similar documents found (search took 31 ms)
1.
In an observational study in which each treated subject is matched to several untreated controls by using observed pretreatment covariates, a sensitivity analysis asks how hidden biases due to unobserved covariates might alter the conclusions. The bounds required for a sensitivity analysis are the solution to an optimization problem. In general, this optimization problem is not separable, in the sense that one cannot find the needed optimum by performing a separate optimization in each matched set and combining the results. We show, however, that this optimization problem is asymptotically separable, so that when there are many matched sets a separate optimization may be performed in each matched set and the results combined to yield the correct optimum with negligible error. This is true when the Wilcoxon rank sum test or the Hodges-Lehmann aligned rank test is applied in matching with multiple controls. Numerical calculations show that the asymptotic approximation performs well with as few as 10 matched sets. In the case of the rank sum test, a table is given containing the separable solution. With this table, only simple arithmetic is required to conduct the sensitivity analysis. The method also supplies estimates, such as the Hodges-Lehmann estimate, and confidence intervals associated with rank tests. The method is illustrated in a study of dropping out of US high schools and the effects on cognitive test scores.
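As a rough illustration of the per-set optimization that the separable approximation performs, the sketch below computes the worst-case expected contribution of a single matched set under a sensitivity parameter gamma bounding the odds of treatment. The function name and interface are hypothetical, not the paper's notation.

```python
def worst_case_expectation(scores, gamma):
    """Worst-case expectation of the treated unit's score in one matched set
    when the odds of treatment may differ by at most gamma between units.
    This is the per-set problem the separable approximation solves;
    the interface is illustrative, not the paper's notation."""
    q = sorted(scores)
    n = len(q)
    best = float("-inf")
    # Try every split: the 'a' smallest scores get weight 1, the rest gamma.
    for a in range(n):
        num = sum(q[:a]) + gamma * sum(q[a:])
        den = a + gamma * (n - a)
        best = max(best, num / den)
    return best
```

Summing such worst-case expectations over the matched sets (together with a variance term) yields the approximate bound for a statistic like the rank sum.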

2.
A sensitivity analysis displays the increase in uncertainty that attends an inference when a key assumption is relaxed. In matched observational studies of treatment effects, a key assumption in some analyses is that subjects matched for observed covariates are comparable, and this assumption is relaxed by positing a relevant covariate that was not observed and not controlled by matching. What properties would such an unobserved covariate need to have to materially alter the inference about treatment effects? For ease of calculation and reporting, it is convenient that the sensitivity analysis be of low dimension, perhaps indexed by a scalar sensitivity parameter, but for interpretation in specific contexts, a higher dimensional analysis may be of greater relevance. An amplification of a sensitivity analysis is defined as a map from each point in a low dimensional sensitivity analysis to a set of points, perhaps a 'curve,' in a higher dimensional sensitivity analysis such that the possible inferences are the same for all points in the set. Possessing an amplification, an investigator may calculate and report the low dimensional analysis, yet have available the interpretations of the higher dimensional analysis.
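The kind of map described here can be illustrated with the well-known amplification of Rosenbaum-style sensitivity analyses, in which a single parameter gamma corresponds to a curve of two-dimensional points (lam, delta) satisfying gamma = (delta * lam + 1) / (delta + lam). A minimal sketch (names illustrative) solves for delta given gamma and lam:

```python
def amplify(gamma, lam):
    """Map a scalar sensitivity parameter gamma to the curve of pairs
    (lam, delta) with gamma = (delta * lam + 1) / (delta + lam), where
    lam bounds the unobserved covariate's effect on treatment odds and
    delta its effect on the outcome.  Solves for delta given lam > gamma."""
    if lam <= gamma:
        raise ValueError("lam must exceed gamma")
    return (gamma * lam - 1.0) / (lam - gamma)
```

For example, gamma = 2 corresponds, among other points, to (lam, delta) = (3, 5): an unobserved covariate tripling the odds of treatment and quintupling the odds of a positive response has the same impact on the inference.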

3.
In observational studies of the interaction between exposures on a dichotomous outcome, a single parameter of a regression model is usually used to describe the interaction, yielding one measure of it. In this article we use the conditional risk of the outcome given exposures and covariates to describe the interaction, and obtain five different measures: the difference between the marginal risk differences, the ratio of the marginal risk ratios, the ratio of the marginal odds ratios, the ratio of the conditional risk ratios, and the ratio of the conditional odds ratios. These measures reflect different aspects of the interaction. Using only one regression model for the conditional risk, we obtain maximum-likelihood (ML) point and interval estimates of these measures, which are efficient by the nature of ML: the ML estimates of the model parameters yield ML estimates of the measures, and the approximate normal distribution of the parameter estimates yields approximate (non-normal) distributions of the measure estimates, and hence confidence intervals. The method can be easily implemented and is presented via a medical example.
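A minimal sketch of three of these measures, computed from an assumed logistic conditional-risk model with two binary exposures and no further covariates (an illustrative simplification; coefficients and names are not the paper's):

```python
import math

def expit(z):
    return 1.0 / (1.0 + math.exp(-z))

def odds(p):
    return p / (1.0 - p)

def interaction_measures(b0, b1, b2, b3):
    """Interaction measures under the assumed conditional-risk model
    p(x1, x2) = expit(b0 + b1*x1 + b2*x2 + b3*x1*x2)."""
    p = {(a, b): expit(b0 + b1 * a + b2 * b + b3 * a * b)
         for a in (0, 1) for b in (0, 1)}
    # difference between risk differences
    dd = (p[1, 1] - p[1, 0]) - (p[0, 1] - p[0, 0])
    # ratio of risk ratios
    rr = (p[1, 1] / p[1, 0]) / (p[0, 1] / p[0, 0])
    # ratio of odds ratios -- equals exp(b3) under this model
    orr = (odds(p[1, 1]) / odds(p[1, 0])) / (odds(p[0, 1]) / odds(p[0, 0]))
    return dd, rr, orr
```

Note that the measures need not agree: with b3 = 0 the odds-ratio measure shows no interaction while the risk-difference measure generally does, which is why the five measures reflect different aspects of the interaction.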

4.
A number of nonstationary models have been developed to estimate extreme events as a function of covariates. A quantile regression (QR) model is a statistical approach for estimating and conducting inference about conditional quantile functions. In this article, we focus on simultaneous variable selection and parameter estimation through penalized quantile regression, comparing regularized quantile regression models with B-splines in a Bayesian framework. Regularization is based on a penalty and favors parsimonious models, especially in high-dimensional settings. The prior distributions related to the penalties are detailed. Five penalties (Lasso, Ridge, SCAD0, SCAD1 and SCAD2) are considered with their equivalent expressions in the Bayesian framework. The regularized quantile estimates are then compared to the maximum likelihood estimates with respect to sample size. Markov chain Monte Carlo (MCMC) algorithms are developed for each hierarchical model to simulate the conditional posterior distributions of the quantiles. Results indicate that SCAD0 and Lasso perform best for quantile estimation according to the relative mean bias (RMB) and relative mean error (RME) criteria, especially in the case of heavy-tailed errors. A case study of the annual maximum precipitation at Charlo, Eastern Canada, with the Pacific North Atlantic climate index as covariate is presented.
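As a frequentist sketch of the penalized objective (the pinball, or check, loss plus a Lasso penalty), minimized here by plain subgradient descent rather than the MCMC samplers the article develops; all names are illustrative:

```python
import numpy as np

def penalized_qr(X, y, tau, lam, lr=0.01, iters=5000):
    """Subgradient descent on the penalized quantile regression objective
    (1/n) * sum rho_tau(y - X @ beta) + lam * ||beta||_1,
    where rho_tau(u) = u * (tau - 1{u < 0}) is the pinball loss."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        u = y - X @ beta
        # subgradient of the pinball loss
        g = -X.T @ np.where(u >= 0, tau, tau - 1.0) / n
        # subgradient of the Lasso penalty
        g = g + lam * np.sign(beta)
        beta = beta - lr * g
    return beta
```

With lam = 0 and an intercept-only design, the tau = 0.5 fit recovers the sample median, which is a quick sanity check on the loss.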

5.
Parametric mixed-effects logistic models can provide effective analysis of binary matched-pairs data. Responses are assumed to follow a logistic model within pairs, with an intercept that varies across pairs according to a specified family of probability distributions G. In this paper we give necessary and sufficient conditions for consistent covariate effect estimation and present a geometric view of estimation which shows that when the assumed family of mixture distributions is rich enough, estimates of the effect of the binary covariate are typically consistent. The geometric view also shows that under the conditions for consistent estimation, the mixed-model estimator is identical to the familiar conditional-likelihood estimator for matched pairs. We illustrate the findings with some examples.
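For a binary covariate, the conditional-likelihood estimator referred to here has a well-known closed form in terms of the discordant pair counts; a minimal sketch (argument names are illustrative):

```python
import math

def matched_pairs_logor(n10, n01):
    """Closed-form conditional-likelihood estimate of the binary covariate's
    log odds ratio from matched binary pairs: only discordant pairs inform
    the estimate (n10 = pairs with exposed success / unexposed failure,
    n01 = the reverse)."""
    return math.log(n10 / n01)
```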

6.
Simultaneous tolerance intervals developed by Limam and Thomas (1988), for the normal regression model, are generalized to the random one-way model with covariates. Simultaneous tolerance intervals for unit means are developed for the balanced model. A simulation study is used to estimate the exact confidence of the tolerance intervals for models with one covariate.

7.
We propose more efficient L-estimates by using pairwise averages of the observations instead of the observations themselves. For instance, we show that the efficiency of minimum-variance quantile estimation of the mean parameter in the exponential distribution improves from 65% to 88%. Simulations show similar improvements in frequently used scale and location estimators such as the interquartile range, MAD, and trimmed mean.
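The pairwise-average idea can be illustrated with the classical Hodges-Lehmann location estimator, the median of all Walsh (pairwise) averages; this is a standard estimator offered as context, not the paper's specific proposal:

```python
from itertools import combinations_with_replacement
import statistics

def hodges_lehmann(x):
    """Median of all Walsh (pairwise) averages, including each observation
    paired with itself -- the classical Hodges-Lehmann location estimator."""
    walsh = [(a + b) / 2.0 for a, b in combinations_with_replacement(x, 2)]
    return statistics.median(walsh)
```

Replacing single observations with pairwise averages is what buys the extra efficiency over the plain sample median while retaining robustness.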

8.
This paper uses the decomposition framework from the economics literature to examine the statistical structure of treatment effects estimated with observational data compared to those estimated from randomized studies. It begins with the estimation of treatment effects using a dummy variable in regression models and then presents the decomposition method from economics, which estimates separate regression models for the comparison groups and recovers the treatment effect using bootstrapping methods. This method shows that the overall treatment effect is a weighted average of structural relationships of patient features with outcomes within each treatment arm and differences in the distributions of these features across the arms. In large randomized trials, the distribution of features across arms is assumed to be very similar. Importantly, randomization balances not only observed features but also unobserved ones. Applying high dimensional balancing methods such as propensity score matching to observational data eliminates the distributional terms of the decomposition model, but unobserved features may still be unbalanced in the observational data. Finally, a correction for non-random selection into the treatment groups is introduced via a switching regime model. Theoretically, the treatment effect estimates obtained from this model should be the same as those from a randomized trial. However, there are significant challenges in identifying the instrumental variables necessary for estimating such models. At a minimum, decomposition models are useful tools for understanding the relationship between treatment effects estimated from observational versus randomized data.
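A minimal numerical sketch of the decomposition idea (the paper's bootstrap machinery and weighting choices are omitted, and the function name is illustrative): with an intercept in the design, the mean outcome gap splits exactly into an explained part (covariate distributions differ) and an unexplained part (coefficients differ).

```python
import numpy as np

def oaxaca(X1, y1, X0, y0):
    """Oaxaca-Blinder-style decomposition sketch.  Each design matrix must
    include an intercept column so fitted means equal sample means and the
    identity gap == explained + unexplained holds exactly."""
    b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
    b0, *_ = np.linalg.lstsq(X0, y0, rcond=None)
    m1, m0 = X1.mean(axis=0), X0.mean(axis=0)
    explained = (m1 - m0) @ b0    # covariate-distribution differences
    unexplained = m1 @ (b1 - b0)  # coefficient (structural) differences
    gap = y1.mean() - y0.mean()
    return gap, explained, unexplained
```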

9.
We evaluate the effects of college choice on earnings using Swedish register databases. This case study is used to motivate the introduction of a novel procedure to analyse the sensitivity of such an observational study to the assumption made that there are no unobserved confounders – variables affecting both college choice and earnings. This assumption is not testable without further information, and should be considered an approximation of reality. To perform a sensitivity analysis, we measure the departure from the unconfoundedness assumption with the correlation between college choice and earnings when conditioning on observed covariates. The use of a correlation as a measure of dependence allows us to propose a standardised procedure by advocating the use of a fixed value for the correlation, typically 1% or 5%, when checking the sensitivity of an evaluation study. A correlation coefficient is, moreover, intuitive to most empirical scientists, which makes the results of our sensitivity analysis easier to communicate than those of previously proposed methods. In our evaluation of the effects of college choice on earnings, the significantly positive effect obtained could not be questioned by a sensitivity analysis allowing for unobserved confounders inducing at most 5% correlation between college choice and earnings.

10.
In this paper, we consider posterior predictive distributions of Type-II censored data for an inverse Weibull distribution. These functions are given by using conditional density functions and conditional survival functions. Although the conditional survival functions were expressed in integral form in previous studies, we derive them in closed form and thereby reduce the computation cost. In addition, we calculate predictive confidence intervals and coverage probabilities for unobserved values by using the posterior predictive survival functions.
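A sketch of the closed-form survival quantities involved, using one common parameterization of the inverse Weibull, F(t) = exp(-(beta / t)^alpha) for t > 0; the paper's posterior-predictive versions average such quantities over the posterior of the parameters, which is omitted here:

```python
import math

def inv_weibull_sf(t, alpha, beta):
    """Survival function of the inverse Weibull with CDF
    F(t) = exp(-(beta / t) ** alpha), t > 0."""
    return 1.0 - math.exp(-((beta / t) ** alpha))

def cond_sf(t, c, alpha, beta):
    """P(T > t | T > c) for t >= c: the conditional survival function used
    to predict unobserved values beyond a censoring time c."""
    return inv_weibull_sf(t, alpha, beta) / inv_weibull_sf(c, alpha, beta)
```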

11.
We introduce distribution-free permutation tests and corresponding estimates for studying the effect of a treatment variable x on a response y. The methods apply in the presence of a multivariate covariate z. They are based on the assumption that the treatment values are assigned randomly to the subjects.
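A minimal sketch of a permutation test of this kind, here for a difference in means justified by random treatment assignment, without the covariate adjustment the paper develops; the names are illustrative:

```python
import random

def perm_pvalue(y_t, y_c, n_perm=2000, seed=0):
    """Two-sided permutation p-value for the difference in mean response
    between treated (y_t) and control (y_c) subjects, valid under random
    assignment of treatment."""
    rng = random.Random(seed)
    n_t = len(y_t)
    obs = sum(y_t) / n_t - sum(y_c) / len(y_c)
    pooled = list(y_t) + list(y_c)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-randomize treatment labels
        diff = (sum(pooled[:n_t]) / n_t
                - sum(pooled[n_t:]) / (len(pooled) - n_t))
        if abs(diff) >= abs(obs):
            count += 1
    return count / n_perm
```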

12.
In this article, we propose a non-parametric quantile inference procedure for cause-specific failure probabilities to estimate the lifetime distribution of length-biased and right-censored data with competing risks. We also derive the asymptotic properties of the proposed estimators of the quantile function. Furthermore, the results are used to construct confidence intervals and bands for the quantile function. Simulation studies are conducted to illustrate the method and theory, and an application to unemployment data is presented.

13.
When estimating population quantiles via a random sample from an unknown continuous distribution function, it is well known that a pair of order statistics may be used to set a confidence interval for any single desired population quantile. In this paper the technique is generalized so that more than one pair of order statistics may be used to obtain simultaneous confidence intervals for the various quantiles that might be required. The generalization immediately extends to the problem of obtaining interval estimates for quantile intervals. Distributions of the ordered and unordered probability coverages of these confidence intervals are discussed, as are the associated distributions of linear combinations of the coverages.
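The single-quantile case rests on a classical distribution-free argument: the interval (X_(r), X_(s)) covers the p-th quantile exactly when between r and s-1 of the n observations fall below it, a Binomial(n, p) event. A sketch of the resulting coverage computation:

```python
from math import comb

def coverage(n, r, s, p):
    """Exact distribution-free confidence coefficient of the order-statistic
    interval (X_(r), X_(s)) for the p-th quantile of a continuous
    distribution, via the binomial argument."""
    return sum(comb(n, k) * p ** k * (1.0 - p) ** (n - k)
               for k in range(r, s))
```

For example, with n = 5 the extreme order statistics give coverage 30/32 = 0.9375 for the median.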

14.
In this work, we propose and investigate a family of nonparametric quantile regression estimates. The proposed estimates combine local linear fitting and double kernel approaches. More precisely, we use a Beta kernel when the covariate's support is compact and a Gamma kernel for left-bounded supports. Finite sample properties together with the asymptotic behavior of the proposed estimators are presented. It is also shown that these estimates enjoy finite variance and resistance to sparse design.

15.
We define a parametric proportional odds frailty model to describe lifetime data incorporating heterogeneity between individuals. An unobserved individual random effect, called frailty, acts multiplicatively on the odds of failure by time t. We investigate fitting by maximum likelihood and by least squares. For the latter, the parametric survivor function is fitted to the nonparametric Kaplan–Meier estimate at the observed failure times. Bootstrap standard errors and confidence intervals are obtained for the least squares estimates. The models are applied successfully to simulated data and to two real data sets. Least squares estimates appear to have smaller bias than maximum likelihood.

16.
The mixed effects models with two variance components are often used to analyze longitudinal data. For these models, we compare two approaches to estimating the variance components, the analysis of variance approach and the spectral decomposition approach. We establish a necessary and sufficient condition for the two approaches to yield identical estimates, and some sufficient conditions for the superiority of one approach over the other, under the mean squared error criterion. Applications of the methods to circular models and longitudinal data are discussed. Furthermore, simulation results indicate that better estimates of variance components do not necessarily imply higher power of the tests or shorter confidence intervals.

17.
A method is proposed for estimating regression parameters from data containing covariate measurement errors by using Stein estimates of the unobserved true covariates. The method produces consistent estimates for the slope parameter in the classical linear errors-in-variables model and applies to a broad range of nonlinear regression problems, provided the measurement error is Gaussian with known variance. Simulations are used to examine the performance of the estimates in a nonlinear regression problem and to compare them with the usual naive ones obtained by ignoring error and with other estimates proposed recently in the literature.

18.
In this paper we consider the problem of constructing confidence intervals for nonparametric quantile regression with an emphasis on smoothing splines. The mean-based approaches for smoothing splines of Wahba (1983) and Nychka (1988) may not be efficient for constructing confidence intervals for the underlying function when the observed data are non-Gaussian distributed, for instance if they are skewed or heavy-tailed. This paper proposes a method of constructing confidence intervals for the unknown τth quantile function (0<τ<1) based on smoothing splines. We investigate the extent to which the proposed estimator provides the desired coverage probability, and develop an improvement based on a local smoothing parameter that provides more uniform pointwise coverage. The results from numerical studies, including a simulation study and real data analysis, demonstrate the promising empirical properties of the proposed approach.

19.
Propensity score matching (PSM) has been widely used to reduce confounding biases in observational studies, and its properties for statistical inference have been investigated and well documented. However, some recent publications have raised concerns about using PSM, especially its potential to increase post-matching covariate imbalance, prompting debate over whether PSM should be used at all. We review empirical and theoretical evidence for and against its use in practice, revisit the property of equal percent bias reduction, and adapt it to more practical situations, showing that PSM has some additional desirable properties. With a small simulation, we explore the impact of caliper width on biases due to mismatching in matched samples and due to the difference between matched and target populations, and show that some issues with PSM may stem from inadequate caliper selection. In summary, we argue that the right question is when and how to use PSM rather than whether to use it at all, and we give suggestions accordingly.
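A toy sketch of greedy 1:1 nearest-neighbour matching on estimated propensity scores with a caliper, the mechanism whose width the simulation varies (the paper's study is more elaborate; names are illustrative):

```python
def caliper_match(treated, controls, caliper):
    """Greedy 1:1 nearest-neighbour matching on propensity scores with a
    caliper; treated units with no control within the caliper are dropped.
    Returns (treated_index, control_index) pairs."""
    matches, used = [], set()
    for i, pt in enumerate(treated):
        best, best_d = None, caliper
        for j, pc in enumerate(controls):
            if j in used:
                continue
            d = abs(pt - pc)
            if d <= best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches
```

A narrow caliper discards more treated units (shifting the matched sample away from the target population) while a wide one tolerates worse matches; that trade-off is the caliper-selection issue the abstract highlights.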

20.
This paper develops a smoothed empirical likelihood (SEL)-based method to construct confidence intervals for quantile regression parameters with auxiliary information. First, we define the SEL ratio and show that it follows a Chi-square distribution. We then construct confidence intervals according to this ratio. Finally, Monte Carlo experiments are employed to evaluate the proposed method.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号