5 similar documents found
1.
2.
We consider a hypothesis testing problem with directional alternatives. We approach the problem from a Bayesian decision-theoretic point of view and consider a situation in which one side of the alternatives is more important or more probable than the other. We develop a general Bayesian framework by specifying a mixture prior structure and a loss function related to the Kullback–Leibler divergence. This Bayesian decision method is applied to Normal and Poisson populations. Simulations are performed to compare the performance of the proposed method with that of a method based on a classical z-test and a Bayesian method based on the “0–1” loss.
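To make the setup concrete, here is a minimal sketch of a directional Bayesian decision rule for a known-variance Normal mean, assuming a point-mass-plus-slab mixture prior. The asymmetric weights stand in for the idea that one side matters more; they are illustrative placeholders, not the paper's KL-divergence-based loss.

```python
import numpy as np
from scipy import stats

def directional_test(xbar, n, sigma=1.0, tau=1.0, p0=0.5,
                     w_minus=1.0, w_plus=2.0):
    """Pick among H-: theta<0, H0: theta=0, H+: theta>0 (illustrative rule)."""
    se = sigma / np.sqrt(n)
    # Marginal likelihood of xbar under H0 (theta = 0)
    m0 = stats.norm.pdf(xbar, 0.0, se)
    # Marginal likelihood under the Normal(0, tau^2) slab: xbar ~ N(0, se^2 + tau^2)
    m1 = stats.norm.pdf(xbar, 0.0, np.sqrt(se**2 + tau**2))
    # Posterior mass on theta = 0
    post0 = p0 * m0 / (p0 * m0 + (1 - p0) * m1)
    # Posterior of theta under the slab is Normal with shrinkage weight w
    w = tau**2 / (tau**2 + se**2)
    mu_post, sd_post = w * xbar, np.sqrt(w) * se
    p_neg = (1 - post0) * stats.norm.cdf(0.0, mu_post, sd_post)
    p_pos = (1 - post0) - p_neg
    # Asymmetric weighting: evidence for the "important" side counts more
    scores = {"H-": w_minus * p_neg, "H0": post0, "H+": w_plus * p_pos}
    return max(scores, key=scores.get)

print(directional_test(xbar=0.6, n=25))
```

With w_plus > w_minus the rule needs less posterior mass on the positive side to declare H+, which is one simple way to encode that one direction is more important or more probable a priori.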
3.
Jörg Drechsler Agnes Dundler Stefan Bender Susanne Rässler Thomas Zwick 《AStA Advances in Statistical Analysis》2008,92(4):439-458
For micro-datasets considered for release as scientific or public use files, statistical agencies face a dilemma: guaranteeing the confidentiality of survey respondents on the one hand and offering sufficiently detailed data on the other. For that reason, a variety of methods to guarantee disclosure control are discussed in the literature. In this paper, we present an application of Rubin’s (J. Off. Stat. 9, 462–468, 1993) idea to generate synthetic datasets from existing confidential survey data for public release. We use a set of variables from the 1997 wave of the German IAB Establishment Panel and evaluate the quality of the approach by comparing the results of an analysis by Zwick (Ger. Econ. Rev. 6(2), 155–184, 2005) on the original data with the results we obtain when the same analysis is run on the dataset produced by the imputation procedure. The comparison shows that valid inferences can be obtained from the synthetic datasets in this context, while confidentiality is guaranteed for the survey participants.
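As an illustration of the general idea (not the IAB implementation), the following is a hedged sketch of partially synthetic data in the spirit of Rubin (1993): a sensitive variable is replaced by draws from a model fitted on the confidential data, m copies are released, and the analyst combines estimates across copies. The variable names and the linear synthesis model are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 5
x = rng.normal(size=n)                     # non-sensitive covariate
y = 2.0 + 1.5 * x + rng.normal(size=n)     # confidential outcome

# Fit the synthesis model on the confidential data
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_sd = np.std(y - X @ beta, ddof=2)

# Release m datasets with y replaced by synthetic draws from the fitted model
synthetic = [X @ beta + rng.normal(scale=resid_sd, size=n) for _ in range(m)]

# The analyst re-estimates the slope on each synthetic copy and averages.
# (Rubin-style combining rules also supply a variance formula, omitted here.)
slopes = [np.linalg.lstsq(X, y_syn, rcond=None)[0][1] for y_syn in synthetic]
print(np.mean(slopes))   # close to the confidential-data slope of 1.5
```

The point the abstract makes is visible even in this toy: the released files contain no original y values, yet an analysis run on them recovers approximately the same estimate as the confidential data.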
4.
Michelle Casey Evgeny Degtyarev María José Lechuga Paola Aimone Alain Ravaud Robert J. Motzer Feng Liu Viktoriya Stalbovskaya Rui Tang Emily Butler Oliver Sailer Susan Halabi Daniel George 《Pharmaceutical statistics》2021,20(2):324-334
The estimand framework requires a precise definition of the clinical question of interest (the estimand), as different ways of accounting for “intercurrent” events after randomization may result in different scientific questions. The initiation of subsequent therapy is common in oncology clinical trials and is considered an intercurrent event if the start of such therapy occurs prior to a recurrence or progression event. Three possible ways to account for this intercurrent event in the analysis are to censor at initiation, to consider recurrence or progression events (including death) that occur before and after the initiation of subsequent therapy, or to consider the start of subsequent therapy as an event in and of itself. The new estimand framework clarifies that these analyses address different questions (“does the drug delay recurrence if no patient had received subsequent therapy?” vs “does the drug delay recurrence with or without subsequent therapy?” vs “does the drug delay recurrence or start of subsequent therapy?”). The framework facilitates discussions during clinical trial planning and design to ensure alignment between the key question of interest, the analysis, and interpretation. This article is the result of a cross-industry collaboration to connect the International Council for Harmonisation E9 addendum concepts to applications. Data from previously reported randomized phase 3 studies in the renal cell carcinoma setting are used to consider common intercurrent events in solid tumor studies, and to illustrate different scientific questions and the consequences of the estimand choice for study design, data collection, analysis, and interpretation.
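The three strategies can be made concrete as three codings of the same patient records. The sketch below is illustrative only: the column names and toy data are invented, and a real analysis would feed each coding into a time-to-event model rather than print tuples.

```python
import pandas as pd

# Toy records: (time, event) for progression-free survival, plus the start
# time of subsequent therapy where it occurred (hypothetical data).
patients = pd.DataFrame({
    "pfs_time":   [10.0, 7.0, 12.0],   # progression/death or censoring time
    "pfs_event":  [1, 1, 0],           # 1 = progressed/died, 0 = censored
    "subtx_time": [None, 4.0, 6.0],    # start of subsequent therapy, if any
})

def hypothetical(row):
    # "...if no patient had received subsequent therapy?": censor at start
    if pd.notna(row.subtx_time) and row.subtx_time < row.pfs_time:
        return row.subtx_time, 0
    return row.pfs_time, row.pfs_event

def treatment_policy(row):
    # "...with or without subsequent therapy?": ignore the intercurrent event
    return row.pfs_time, row.pfs_event

def composite(row):
    # "...delay recurrence or start of subsequent therapy?": earliest counts
    if pd.notna(row.subtx_time) and row.subtx_time < row.pfs_time:
        return row.subtx_time, 1
    return row.pfs_time, row.pfs_event

for name, f in [("hypothetical", hypothetical),
                ("treatment policy", treatment_policy),
                ("composite", composite)]:
    print(name, [f(r) for r in patients.itertuples()])
```

Running this shows the same patient contributing a censored observation at 6 months under one strategy and an event at 6 months under another, which is exactly why the framework insists the question be fixed before the coding is chosen.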
5.
The integration of different data sources is a widely discussed topic among both researchers and official statistics agencies. Integrating data helps contain the costs and time required by new data collections. Non-parametric micro Statistical Matching (SM) makes it possible to integrate ‘live’ data using only the observed information, potentially avoiding misspecification bias and reducing the computational effort. Despite these advantages, there is no robust way to assess the goodness of the integration produced by this method. Moreover, several applications follow commonly accepted practices, such as the recommendation to use the biggest data set as the donor. We propose a validation strategy to assess the goodness of the integration. We apply it to investigate these practices and to explore how different combinations of SM techniques and distance functions perform in terms of the reliability of the synthetic (complete) data set they generate. The validation strategy takes advantage of the relations existing among the variables before and after the integration. The results show that the ‘the biggest, the best’ rule should no longer be considered mandatory. Indeed, the goodness of the integration increases with the variability of the matching variables rather than with the dimensionality ratio between the recipient and the donor data set.
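To illustrate the mechanics (not the paper's specific techniques), the sketch below performs a nearest-neighbour distance hot-deck match on a single common variable and then applies the kind of pre/post-integration relationship check such a validation strategy builds on. All names, the Euclidean distance, and the correlation-based check are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Donor observes (x, z); recipient observes only x. Goal: attach z to the
# recipient records via a common matching variable x.
n_donor, n_recip = 300, 200
x_donor = rng.normal(size=n_donor)
z_donor = 0.8 * x_donor + rng.normal(scale=0.5, size=n_donor)
x_recip = rng.normal(size=n_recip)

# Nearest-neighbour distance hot deck: each recipient record takes z from
# the donor record closest in x.
idx = np.abs(x_recip[:, None] - x_donor[None, :]).argmin(axis=1)
z_imputed = z_donor[idx]

# Validation idea: the (x, z) relation in the synthetic (complete) data set
# should mirror the relation observed in the donor data.
print("donor corr(x, z):    ", np.corrcoef(x_donor, z_donor)[0, 1])
print("synthetic corr(x, z):", np.corrcoef(x_recip, z_imputed)[0, 1])
```

In this spirit, one can hold the donor/recipient sizes fixed while varying the spread of the matching variable, and observe that the agreement between the two correlations tracks that variability more closely than the size ratio of the two files.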