1.
Since the implementation of the International Conference on Harmonization (ICH) E14 guideline in 2005, regulators have required a “thorough QTc” (TQT) study for evaluating the effects of investigational drugs on delayed cardiac repolarization, as manifested by a prolonged QTc interval. However, TQT studies have increasingly been viewed unfavorably because of their poor cost-effectiveness. Several researchers have noted that a robust drug concentration-QTc (conc-QTc) modeling assessment in early phase development should, in most cases, obviate the need for a subsequent TQT study. In December 2015, ICH released an “E14 Q&As (R3)” document supporting the use of conc-QTc modeling for regulatory decisions. In this article, we propose a simple improvement of two popular conc-QTc assessment methods for typical first-in-human crossover-like single ascending dose clinical pharmacology trials. The improvement is achieved, in part, by leveraging the intrasubject correlation patterns routinely encountered (and expected) in such trials. A real example involving a single ascending dose trial and a corresponding TQT trial, along with results from a simulation study, illustrates the strong performance of the proposed method. The improved conc-QTc assessment will further enable highly reliable go/no-go decisions in early phase clinical development and deliver results that support subsequent TQT study waivers by regulators.
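For context, a minimal sketch of the kind of baseline conc-QTc analysis the article builds on is shown below: a linear mixed-effects model of baseline-adjusted QTc change on plasma concentration, with the one-sided upper 95% confidence bound on the predicted effect at Cmax compared against the 10 ms threshold of ICH E14. The column names (subject, conc, dqtc), the random-slope structure, and the helper function are illustrative assumptions; the authors' improved method is not reproduced here.

```python
# A minimal conc-QTc sketch: linear mixed-effects model of baseline-adjusted QTc
# change (dqtc) on plasma concentration (conc), with subject-level random
# intercepts and slopes. Column names and structure are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def conc_qtc_assessment(df: pd.DataFrame, c_max: float, threshold: float = 10.0):
    """Return the predicted dQTc at c_max, its one-sided upper 95% confidence
    bound, and whether that bound stays below the 10 ms ICH E14 threshold."""
    model = smf.mixedlm("dqtc ~ conc", df, groups=df["subject"], re_formula="~conc")
    fit = model.fit(reml=True)

    x = np.array([1.0, c_max])                      # design vector: intercept, conc
    beta = fit.fe_params.to_numpy()                 # fixed effects: intercept, slope
    cov = fit.cov_params().iloc[:2, :2].to_numpy()  # covariance of fixed effects
    pred = float(x @ beta)
    se = float(np.sqrt(x @ cov @ x))
    upper = pred + 1.645 * se                       # one-sided upper 95% bound

    return {"pred_dqtc": pred, "upper_bound": upper, "below_threshold": upper < threshold}
```

A favorable (negative) QT assessment corresponds to the upper bound staying below 10 ms at clinically relevant concentrations.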
2.
A randomized trial allows estimation of the causal effect of an intervention, compared to a control, in the overall population and in subpopulations defined by baseline characteristics. Often, however, clinical questions also arise regarding the treatment effect in subpopulations of patients who would experience clinical or disease-related events post-randomization. Events that occur after treatment initiation and potentially affect the interpretation or the existence of the measurements are called intercurrent events in the ICH E9(R1) guideline. If the intercurrent event is a consequence of treatment, randomization alone is no longer sufficient to meaningfully estimate the treatment effect. Analyses that compare the subgroups of patients without the intercurrent event on the intervention and control arms do not estimate a causal effect. This is well known, but post-hoc analyses of this kind are commonly performed in drug development. An alternative approach is the principal stratum strategy, which classifies subjects according to their potential occurrence of an intercurrent event on both study arms. We illustrate with examples that questions formulated through principal strata arise naturally in drug development and argue that approaching these questions with the ICH E9(R1) estimand framework has the potential to lead to more transparent assumptions as well as more adequate analyses and conclusions. In addition, we provide an overview of the assumptions required for estimation of effects in principal strata. Most of these assumptions are unverifiable and should hence be grounded in solid scientific understanding. Sensitivity analyses are needed to assess the robustness of conclusions.
3.
The use of surrogate variables has been proposed as a means to capture, for a given observed set of data, the sources driving the dependency structure among high-dimensional sets of features, and to remove the effects of those sources and their potential negative impact on simultaneous inference. In this article we illustrate the potential effects of latent variables on tests of dependence and the resulting impact on multiple inference, briefly review the method of surrogate variable analysis proposed by Leek and Storey (PNAS 2008;105:18718-18723), and assess that method via simulations intended to mimic the complexity of feature dependence observed in real-world microarray data. The method is also assessed via application to a recent Merck microarray data set. Both the simulation and case study results indicate that surrogate variable analysis can offer a viable strategy for tackling the multiple testing dependence problem when the features follow a potentially complex correlation structure, yielding less variable false positive rates and increased power.
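To make the idea concrete, the sketch below implements the core of the surrogate variable strategy: regress each feature on the primary variable, extract the leading singular vectors of the residual matrix as surrogate variables, and include them as covariates in the downstream feature-wise tests. This is a simplification of Leek and Storey's full algorithm (which iteratively reweights features before the decomposition); function and variable names are illustrative.

```python
# Core of the surrogate variable idea: feature-wise regression on the primary
# variable, then SVD of the residual matrix to recover hidden common structure.
# A simplification of Leek and Storey's algorithm; names are illustrative.
import numpy as np

def surrogate_variables(Y: np.ndarray, x: np.ndarray, n_sv: int) -> np.ndarray:
    """Y: features-by-samples data matrix; x: primary variable of length n_samples.
    Returns an n_samples-by-n_sv matrix of estimated surrogate variables."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])  # intercept + primary variable
    beta, *_ = np.linalg.lstsq(X, Y.T, rcond=None)          # feature-wise least squares
    R = Y.T - X @ beta                                      # residuals: samples x features
    U, s, Vt = np.linalg.svd(R, full_matrices=False)        # common residual structure
    return U[:, :n_sv]

# Downstream tests then use the design [intercept, x, SV_1, ..., SV_k] per feature.
```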
4.
The GDP growth paradigm has come under increased scrutiny in recent years, with the rising threats of global social inequality, poverty, and environmental degradation. New thinking around ecosocialism, degrowth, happiness, and the wellbeing economy insists on keeping the utopian imagination alive. It seeks to break through the constraints of traditional state-or-market development debates by searching for ways to subordinate both the state and the market to society, in harmony with non-human nature. Pioneering work on alternative development indicators has been done in recent times, the most notable being the Gross National Happiness Index, currently in practice in the small Buddhist mountain state of Bhutan. The happiness/wellbeing perspectives do not offer an explicit critique of capitalism and avoid any mention of ‘socialism’ or ‘ecosocialism’. Is there a Chinese Wall between the Buddha and Marx, or can these perspectives be harmonized as part of building a broader counter-hegemonic movement?
5.
Sample sizes of Phase 2 dose-finding studies, usually determined by the power required to detect a significant dose–response relationship, will generally not provide adequate precision for Phase 3 target dose selection. We propose to calculate the sample size of a dose-finding study based on the probability of successfully identifying the target dose within an acceptable range (e.g., 80%–120% of the target) using the multiple comparison and modeling procedure (MCP-Mod). With the proposed approach, different design options for the Phase 2 dose-finding study can also be compared. Because of the inherent uncertainty around an assumed true dose–response relationship, sensitivity analyses to assess the robustness of the sample size calculations to deviations from the modeling assumptions are recommended. Planning for a hypothetical Phase 2 dose-finding study is used to illustrate the main points. Code for the proposed approach is available at https://github.com/happysundae/posMCPMod.
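As an illustration of sample sizing on the probability scale, the sketch below simulates trials from an assumed true Emax dose–response and estimates, for each candidate per-arm sample size, the probability that the estimated target dose lands within 80%–120% of the truth. For brevity it fits a single Emax model by nonlinear least squares rather than the full MCP-Mod procedure, and all parameter values are hypothetical.

```python
# Simplified simulation-based sample sizing for target dose estimation under an
# assumed true Emax dose-response. A single Emax fit replaces the full MCP-Mod
# procedure (model selection across candidate shapes) for brevity.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2024)
DOSES = np.array([0.0, 12.5, 25.0, 50.0, 100.0])    # hypothetical dose levels
E0, EMAX, ED50, SIGMA = 0.0, 8.0, 25.0, 10.0        # assumed true parameters
DELTA = 5.0                                         # clinically relevant effect

def emax_model(d, e0, emax, ed50):
    return e0 + emax * d / (ed50 + d)

def target_dose(emax, ed50, delta=DELTA):
    # Smallest dose achieving effect delta over placebo under the Emax model.
    return delta * ed50 / (emax - delta) if emax > delta else np.inf

TRUE_TD = target_dose(EMAX, ED50)

def prob_success(n_per_arm, n_sim=2000, window=(0.8, 1.2)):
    """Probability that the estimated target dose falls within 80%-120% of truth."""
    hits = 0
    for _ in range(n_sim):
        d = np.repeat(DOSES, n_per_arm)
        y = emax_model(d, E0, EMAX, ED50) + rng.normal(0.0, SIGMA, d.size)
        try:
            popt, _ = curve_fit(emax_model, d, y, p0=[0.0, 5.0, 20.0], maxfev=5000)
            td_hat = target_dose(popt[1], popt[2])
            hits += window[0] * TRUE_TD <= td_hat <= window[1] * TRUE_TD
        except RuntimeError:
            pass                                    # non-converged fit counts as a failure
    return hits / n_sim

for n in (20, 40, 60, 80):                          # candidate per-arm sample sizes
    print(n, prob_success(n))
```

The smallest n whose estimated probability exceeds the study's success target would then be carried forward, with sensitivity runs under alternative true dose–response shapes.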
6.
VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations - Inspired by “extra-market” initiatives to ensure media diversity in social-democratic Northern Europe, the...
7.
Over two decades ago, Brown and Forsythe (B-F) (1974) proposed an innovative solution to the problem of comparing independent normal means under heteroscedasticity. Since then, their testing procedure has gained in popularity, and various articles have been published in which the B-F test forms the basis of the research. The purpose of this paper is to point out, and correct, a flaw in the B-F testing procedure. Specifically, it is shown that the approximation proposed by B-F for the null distribution of their test statistic is inadequate. An improved approximation is provided, and the small-sample null properties of the modified B-F test are studied via simulation. The empirical findings support the theoretical result that the modified B-F test preserves the test size better than the original B-F test.
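For reference, the sketch below computes the original B-F statistic and its classical null approximation: an F distribution with k - 1 numerator degrees of freedom and a Satterthwaite-type denominator degrees of freedom. The paper's modified approximation is not given in the abstract and is therefore not implemented here.

```python
# The original Brown-Forsythe statistic for comparing k means under
# heteroscedasticity: F* = sum n_i (m_i - m)^2 / sum (1 - n_i/N) s_i^2,
# referred to an F(k - 1, df2) distribution with Satterthwaite-type df2.
import numpy as np
from scipy import stats

def brown_forsythe_means(*groups):
    """Return (F*, df1, df2, p-value) for the classical B-F means test."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    N = n.sum()
    grand_mean = (n * m).sum() / N

    denom = ((1 - n / N) * v).sum()
    f_star = (n * (m - grand_mean) ** 2).sum() / denom

    # Satterthwaite-type denominator degrees of freedom.
    c = (1 - n / N) * v / denom
    df2 = 1.0 / (c ** 2 / (n - 1)).sum()
    return f_star, k - 1, df2, stats.f.sf(f_star, k - 1, df2)
```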
8.
North American Indigenous adolescents smoke earlier, smoke more, and are more likely to become regular smokers as adults than youth from any other ethnic group, yet we know very little about their early smoking trajectories. We use multilevel growth modeling across five waves of data from Indigenous adolescents (aged 10–13 years at Wave 1) to investigate factors associated with becoming a daily smoker. Several factors, including the number of peers who smoked at Wave 1 and meeting diagnostic criteria for a major depressive episode and conduct disorder, were associated with early daily smoking. Only age and increases in the number of smoking peers were associated with increased odds of becoming a daily smoker.
9.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample-size or minimum-risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results, through smaller estimation bias, smaller mean squared error, and larger power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples, while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
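A minimal sketch of the two-step structure is shown below. Because the RGLR estimator is not available in standard libraries, stratum-specific Cox fits (via lifelines) stand in for the RGLR-based estimates in step 1; step 2 combines them with sample-size weights. The column names (time, event, treat, stratum) are illustrative assumptions.

```python
# Two-step stratified analysis: stratum-specific log hazard ratio estimates,
# combined with sample-size weights. Stratum-wise Cox fits (lifelines) stand in
# for the RGLR-based estimates; column names are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def combined_log_hr(df: pd.DataFrame):
    """df columns (assumed): time, event (0/1), treat (0/1), stratum."""
    est, var, size = [], [], []
    for _, sub in df.groupby("stratum"):
        cph = CoxPHFitter().fit(sub[["time", "event", "treat"]],
                                duration_col="time", event_col="event")
        est.append(float(cph.params_["treat"]))            # stratum log hazard ratio
        var.append(float(cph.standard_errors_["treat"]) ** 2)
        size.append(len(sub))

    w = np.array(size) / np.sum(size)                      # sample-size weights
    combined = float(np.sum(w * np.array(est)))
    se = float(np.sqrt(np.sum(w ** 2 * np.array(var))))
    return combined, se                                    # weighted log HR and its SE
```

Minimum-risk weights would replace the sample-size weights in step 2 with weights that trade off the bias and variance of each stratum's estimate.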
10.
In this paper, we extend the data sharpening algorithm of Choi and Hall (Data sharpening as a prelude to density estimation. Biometrika. 1999;86(4):941-947) for kernel density estimation to interval-censored data. Data sharpening has several advantages, including bias and mean integrated squared error (MISE) reduction as well as increased robustness to bandwidth misspecification. Several interval metrics are explored for use with the kernel function in the data sharpening transformation. A simulation study based on randomly generated data is conducted to assess and compare the performance of each interval metric. It is found that sharpening reduces the bias, often with little effect on the variance, thus maintaining or reducing the overall MISE. Applications involving time to onset of HIV and running distances subject to measurement error are used for illustration.
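As background, the sketch below shows one common first-order form of Choi-Hall sharpening for an ordinary (uncensored) univariate sample with a Gaussian kernel: each observation takes a small step up the estimated log-density gradient before the final density estimate is computed, which cancels the leading O(h^2) bias term of the standard KDE. The step size h^2/2 reflects the Gaussian kernel's unit second moment; the paper's interval-censored extension and interval metrics are not reproduced here.

```python
# One first-order form of Choi-Hall data sharpening with a Gaussian kernel:
# move each point by (h^2 / 2) * (f'/f) evaluated at the point (a step up the
# estimated log-density gradient), then apply an ordinary KDE to the moved data.
import numpy as np

def _gauss(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def _kde_and_gradient(x, data, h):
    u = (x[:, None] - data[None, :]) / h
    f = _gauss(u).mean(axis=1) / h                 # kernel density estimate at x
    df = (-u * _gauss(u)).mean(axis=1) / h ** 2    # its derivative at x
    return f, df

def sharpened_kde(x_grid, data, h):
    f, df = _kde_and_gradient(data, data, h)       # pilot estimate at the data points
    sharpened = data + 0.5 * h ** 2 * df / f       # sharpening transformation
    f_sharp, _ = _kde_and_gradient(x_grid, sharpened, h)
    return f_sharp
```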