Similar Articles
20 similar articles found (search time: 0 ms)
2.
The prevalence of obesity among US citizens has grown rapidly over the last few decades, especially among low-income individuals. This has led to questions about the effectiveness of nutritional assistance programs such as the Supplemental Nutrition Assistance Program (SNAP). Previous results on the effect of SNAP participation on obesity are mixed. These findings, however, are based on the assumption that participation status can be accurately observed, despite significant misclassification errors reported in the literature. Using propensity score matching, we conclude that there seems to be a positive effect of SNAP participation on obesity rates for female participants and no such effect for males, a result that is consistent with several previous studies. However, an extensive sensitivity analysis reveals that the positive effect for females is sensitive to misclassification errors and to the conditional independence assumption. Thus, analogous findings should also be used with caution unless examined through the prism of classification errors and of the other assumptions used for the identification of causal parameters.

3.
In simulation studies for discriminant analysis, misclassification errors are often computed using the Monte Carlo method, by testing a classifier on large samples generated from known populations. Although large samples are expected to behave closely to the underlying distributions, they may not do so in a small interval or region, and thus may lead to unexpected results. We demonstrate with an example that the LDA misclassification error computed via the Monte Carlo method may often be smaller than the Bayes error. We give a rigorous explanation and recommend a method to properly compute misclassification errors.
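As an illustrative sketch of the setting in this abstract (not the authors' own example), one can compare the known Bayes error of two univariate normal populations with a Monte Carlo estimate of the error of a plug-in LDA rule; the population parameters, sample sizes, and seed below are assumed for the demonstration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two univariate normal populations, N(0,1) and N(delta,1), equal priors.
delta = 2.0
bayes_error = norm.cdf(-delta / 2)   # optimal rule classifies to the nearer mean

# Train a plug-in LDA rule on a modest training sample.
n_train = 50
x0 = rng.normal(0.0, 1.0, n_train)
x1 = rng.normal(delta, 1.0, n_train)
cut = (x0.mean() + x1.mean()) / 2    # estimated midpoint decision boundary

# Monte Carlo error estimate on a large test sample from the known populations.
n_test = 1_000_000
t0 = rng.normal(0.0, 1.0, n_test)
t1 = rng.normal(delta, 1.0, n_test)
mc_error = 0.5 * np.mean(t0 > cut) + 0.5 * np.mean(t1 <= cut)

print(f"Bayes error       : {bayes_error:.4f}")
print(f"Monte Carlo (LDA) : {mc_error:.4f}")
```

The fitted rule's true error can never fall below the Bayes error, but the Monte Carlo estimate of it is a random quantity and can dip below by sampling chance, which is the phenomenon the abstract warns about.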

4.
Previous work has been carried out on the use of double sampling schemes for inference from binomial data which are subject to misclassification. The double sampling scheme utilizes a sample of n units which are classified by both a fallible and a true device and another sample of n2 units which are classified only by the fallible device. A triple sampling scheme incorporates an additional sample of n1 units which are classified only by the true device. In this paper we apply this triple sampling scheme to estimation from binomial data. First, estimation of a binomial proportion is discussed under different misclassification structures. Then, the problem of optimal allocation of sample sizes is discussed.

5.
This article considers the use of sports board games to introduce or illustrate a wide variety of probability concepts to introductory statistics students in an integrated manner. We demonstrate the use of a single game (Strat-O-Matic® Baseball) to introduce probability distributions, sample spaces, the laws of addition and multiplication of probabilities, independence, mutual exclusivity, randomization, conditional probability, and Bayes' Theorem. Empirical and anecdotal evidence suggests that student comprehension and retention are enhanced by the use of examples constructed from the simple and interesting contexts provided by a sports board game.
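The kind of two-stage chance mechanism such games rely on (a roll selects whose card is read, and the outcome probability depends on that card) lends itself to the law of total probability and Bayes' Theorem; the numbers below are purely illustrative assumptions, not taken from the actual game:

```python
# Hypothetical two-stage event in the spirit of a sports board game:
# a die roll selects the pitcher's card or the batter's card with equal
# probability, and the chance of a hit depends on which card is in play.
p_pitcher = 0.5
p_hit_given_pitcher = 0.20
p_hit_given_batter = 0.35

# Law of total probability: overall chance of a hit.
p_hit = p_pitcher * p_hit_given_pitcher + (1 - p_pitcher) * p_hit_given_batter

# Bayes' Theorem: given that a hit occurred, the probability that the
# pitcher's card was the one in play.
p_pitcher_given_hit = p_pitcher * p_hit_given_pitcher / p_hit

print(f"P(hit) = {p_hit:.4f}")
print(f"P(pitcher's card | hit) = {p_pitcher_given_hit:.4f}")
```

Working backward from the observed outcome to the hidden card is exactly the sort of concrete, game-grounded Bayes exercise the abstract describes.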

6.
In recent years, a great deal of literature has been published concerning the identification of predictive biomarkers, and indeed, an increasing number of therapies have been licensed on this basis. However, this progress has been made almost exclusively on the basis of biomarkers measured prior to exposure to treatment. There are quite different challenges when the responding population can only be identified on the basis of outcomes observed following exposure to treatment, especially if it represents only a small proportion of patients. The purpose of this paper is to explore whether or when a treatment could be licensed on the basis of post-treatment predictive biomarkers (PTPB); the focus is on oncology, but the concepts should apply to all therapeutic areas. We review the potential pitfalls in hypothesising the presence of a PTPB. We also present challenges in trial design required to confirm and license on the basis of a PTPB: What is the control population? Could there be a detriment to non-responders from exposure to the new treatment? Can responders be identified rapidly? Could prior exposure to the new treatment adversely affect performance of the control in responders? Nevertheless, if the patients to be treated could be rapidly identified after prior exposure to treatment, and without harm to non-responders, in appropriately designed and analysed trials, then perhaps more targeted therapies could be made available to patients. Copyright © 2014 John Wiley & Sons, Ltd.

7.
The relative accuracy of estimators in recovering supply and demand parameters can depend on the market institutions that generate the data. The parameters of known supply and demand functions are estimated with data from laboratory market experiments with human buyers and sellers. Single-equation estimators dominate simultaneous-equations estimators in recovering supply and demand parameters from posted-offer market data. The inaccuracy of simultaneous-equations estimators with posted-offer data can be explained by the implications for error distributions of the inherent properties of this market institution. Simultaneous-equations estimators perform better with closing-price data from double-auction markets.

8.
Consider assessing the evidence for an exposure variable and a disease variable being associated, when the true exposure variable is more costly to obtain than an error‐prone but nondifferential surrogate exposure variable. From a study design perspective, there are choices regarding the best use of limited resources. Should one acquire the true exposure status for fewer subjects or the surrogate exposure status for more subjects? The issue of validation is also central, i.e., should we simultaneously measure the true and surrogate exposure variables on a subset of study subjects? Using large‐sample theory, we provide a framework for quantifying the power of testing for an exposure–disease association as a function of study cost. This enables us to present comparisons of different study designs under different suppositions about both the relative cost and the performance (sensitivity and specificity) of the surrogate variable. We present simulations to show the applicability of our theoretical framework, and we provide a case‐study comparing results from an actual study to what could have been seen had true exposure status been ascertained for a different proportion of study subjects. We also describe an extension of our ideas to a more complex situation involving covariates. The Canadian Journal of Statistics 47: 222–237; 2019 © 2019 Statistical Society of Canada

9.
In observational studies the assignment of units to treatments is not under control. Consequently, the estimation and comparison of treatment effects based on the empirical distribution of the responses can be biased, since the units exposed to the various treatments could differ in important unknown pretreatment characteristics that are related to the response. An important example studied in this article is the question of whether private schools offer better quality of education than public schools. In order to address this question we use data collected in the year 2000 by the OECD for the Programme for International Student Assessment (PISA). Focusing for illustration on mathematics scores of 15-year-old pupils in Ireland, we find that the raw average score of pupils in private schools is higher than that of pupils in public schools. However, application of a newly proposed method for observational studies suggests that less able pupils tend to enroll in public schools, so that their lower scores are not necessarily an indication of poor quality of the public schools. Indeed, when comparing the average score in the two types of schools after adjusting for the enrollment effects, we find, quite surprisingly, that public schools perform better on average. This outcome is supported by the methods of instrumental variables and latent variables, commonly used by econometricians for analyzing and evaluating social programs.

10.
This paper examines both theoretically and empirically whether the common practice of using OLS multivariate regression models to estimate average treatment effects (ATEs) under experimental designs is justified by the Neyman model for causal inference. Using data from eight large U.S. social policy experiments, the paper finds that estimated standard errors and significance levels for ATE estimators are similar under the OLS and Neyman models when baseline covariates are included in the models, even though theory suggests that this may not have been the case. This occurs primarily because treatment effects do not appear to vary substantially across study subjects.
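A minimal simulation in this spirit (with an assumed data-generating process, not the social policy experiments the paper analyzes) shows that the difference-in-means (Neyman) estimator and OLS with a baseline covariate target the same ATE under randomization:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
x = rng.normal(size=n)                          # baseline covariate
t = rng.integers(0, 2, n)                       # randomized 0/1 treatment
y = 1.0 * t + 2.0 * x + rng.normal(size=n)      # constant treatment effect of 1.0

# Difference-in-means (Neyman) estimate of the ATE.
dm = y[t == 1].mean() - y[t == 0].mean()

# OLS with the baseline covariate: same estimand; the covariate absorbs
# outcome variance, typically shrinking the standard error.
X = np.column_stack([np.ones(n), t, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ols_ate = beta[1]

print(f"difference-in-means ATE: {dm:.3f}")
print(f"OLS (with covariate) ATE: {ols_ate:.3f}")
```

With a constant treatment effect, as in this simulation, both estimators recover the true ATE of 1.0; the paper's empirical point is that real treatment-effect heterogeneity is often small enough that the two frameworks give similar standard errors too.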

11.
The idea of treating the random effects as fixed when constructing a test for a linear hypothesis (of fixed effects) in a mixed linear model is considered in this paper. The paper examines when such a test statistic can be computed and what its distributional properties are with respect to the actual mixed model.

12.
In this article we examine three concepts of fairness in employment decisions. Two of these concepts are widely known in the literature as “Fairness 1” and “Fairness 2”. The third concept, which we refer to as “Fairness 0”, is defined and introduced here. Fairness 0 applies to the hiring stage, whereas Fairness 1 and Fairness 2 apply to the placement or promotion stages of employment. Our results have important policy implications. We show that the three concepts of fairness can only rarely be achieved simultaneously.

13.
Universities have always been one of the key players in open access publishing and have encountered the particular obstacle that faces this Green model of open access, namely, disappointing author uptake. Today, the university has a unique opportunity to reinvent and reinvigorate the model of the institutional repository. This article explores what is not working about the way we talk about repositories to authors today and how we can better meet faculty needs. More than an archive, a repository can be a showcase that allows scholars to build attractive scholarly profiles, and a platform for publishing original content in emerging open-access journals.

14.
Hopes and expectations for the use and utility of new, emerging biomarkers in drug development have probably never been higher, especially in oncology. Biomarkers are exalted as vital patient selection tools in an effort to target those most likely to benefit from a new drug, and so to reduce development costs, lessen risk and expedite development times. It is further hoped that biomarkers can be used as surrogate endpoints for clinical outcomes, to demonstrate effectiveness and, ultimately, to support drug approval. However, I perceive that all is not straightforward, and, particularly in terms of the promise of accelerated drug development, biomarker strategies may not in all cases deliver the advances and advantages hoped for.

16.
Scheffé’s mixed model, generalized for application to multivariate repeated measures, is known as the multivariate mixed model (MMM). The primary advantages of the MMM are (1) the minimum sample size required to conduct an analysis is smaller than for competing procedures, and (2) for certain covariance structures, the MMM analysis is more powerful than its competitors. The primary disadvantage is that the MMM makes a very restrictive covariance assumption, namely multivariate sphericity. This paper shows, first, that even minor departures from multivariate sphericity inflate the size of MMM-based tests. Accordingly, MMM analyses, as computed in release 4.0 of SPSS MANOVA (SPSS Inc., 1990), cannot be recommended unless it is known that multivariate sphericity is satisfied. Second, it is shown that a new Box-type (Box, 1954) Δ-corrected MMM test adequately controls test size unless the departure from multivariate sphericity is severe or the covariance matrix departs substantially from a multiplicative Kronecker structure. Third, power functions of adjusted MMM tests for selected covariance and noncentrality structures are compared to those of doubly multivariate methods that do not require multivariate sphericity. Based on relative efficiency evaluations, the adjusted MMM analyses described in this paper can be recommended only when sample sizes are very small or there is reason to believe that multivariate sphericity is nearly satisfied. Neither the ε-adjusted analysis suggested in the SPSS MANOVA output (release 4.0) nor the adjusted analysis suggested by Boik (1988) can be recommended at all.

17.
The ICH E9 guideline on Statistical Principles for Clinical Trials is a pivotal document for statisticians in clinical research in the pharmaceutical industry guiding, as it does, statistical aspects of the planning, conduct and analysis of regulatory clinical trials. New statisticians joining the industry require a thorough and lasting understanding of the 39-page guideline. Given the amount of detail to be covered, traditional (lecture-style) training methods are largely ineffective. Directed reading, perhaps in groups, may be a helpful approach, especially if experienced staff are involved in the discussions. However, as in many training scenarios, exercise-based training is often the most effective approach to learning. In this paper, we describe several variants of a training module in ICH E9 for new statisticians, combining directed reading with a game-based exercise, which have proved to be highly effective and enjoyable for course participants.

18.
Continuous outcomes are often dichotomized to classify trial subjects as responders or nonresponders, with the difference in rates of response between treatment and control defined as the “responder effect.” In this article, we caution that dichotomization of continuous interval outcomes may not be best practice. Defining clinical benefit or harm for continuous interval outcomes as the difference between the means of treatment and control, that is, the “continuous treatment effect,” we examine the case where treatment and control outcomes are normally distributed and differ only in location. For this case, continuous treatment effects may be considered clinically relevant if they exceed a prespecified minimum clinically important difference. In contrast, using minimum clinically important differences as dichotomization thresholds will not ensure clinically relevant responder effects. For example, in some situations, increasing the threshold may actually relax the criterion for effectiveness by increasing the calculated responder effect. Using responder effects to quantitatively assess benefit or risk of investigational drugs for continuous interval outcomes presents interpretational challenges. In particular, when the dichotomization threshold is halfway between the treatment and control outcome means, the responder effect is at a maximum with a magnitude monotonically related to the number of standard deviations between the mean outcomes of treatment and control. Large responder effect benefits may therefore reflect clinically unimportant continuous treatment effects amplified by small standard deviations, and small responder effect risks may reflect either clinically important continuous treatment effects minimized by large standard deviations, or selection of a dichotomization threshold not providing maximum responder effect.
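The midpoint property this abstract describes, that the responder effect peaks when the dichotomization threshold sits halfway between the two means, can be checked numerically for normal outcomes with equal spread; the means and standard deviation below are illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

mu_c, mu_t, sigma = 0.0, 0.5, 1.0   # control/treatment means; effect of 0.5 SD

def responder_effect(c):
    """P(treatment > c) - P(control > c) for normal outcomes with equal SD."""
    return norm.sf((c - mu_t) / sigma) - norm.sf((c - mu_c) / sigma)

# Scan candidate dichotomization thresholds.
grid = np.linspace(-3, 3, 601)
c_star = grid[np.argmax(responder_effect(grid))]

# Closed form at the midpoint: 2 * Phi(d/2) - 1, where d = (mu_t - mu_c)/sigma.
midpoint = (mu_t + mu_c) / 2
max_effect = 2 * norm.cdf((mu_t - mu_c) / (2 * sigma)) - 1

print(f"threshold maximizing responder effect: {c_star:.2f} (midpoint = {midpoint:.2f})")
print(f"maximum responder effect             : {max_effect:.4f}")
```

Note that shrinking `sigma` inflates `max_effect` without changing the continuous treatment effect `mu_t - mu_c` at all, which is the amplification phenomenon the abstract warns about.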

19.
At the 22nd Annual North Carolina Serials Conference, focused on “Collaboration, Community, and Connection,” Linda Blake and Hilary Fredette of West Virginia University presented “‘Can we Lend?’: Communicating Interlibrary Loan Rights,” reviewing their experiences collaborating across an academic library to achieve the best possible interlibrary loan e-journal access within the bounds of sometimes inscrutable licenses.
