Search results: 9 records
1.
This paper reviews recent developments in the stochastic comparison of order statistics. The results discussed are principally: (1) stochastic comparisons of linear combinations of order statistics from distributions F and G, where G⁻¹F is convex or star-shaped; (2) stochastic comparisons of individual order statistics, and of vectors of order statistics, from underlying heterogeneous distributions, using majorization and Schur-function theory; (3) stochastic comparisons of random processes. Applications to reliability problems are presented, illustrating the use and value of the theoretical results described.
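The majorization approach in (2) lends itself to a quick numerical check. The sketch below is not from the paper; it is a minimal Monte Carlo illustration in Python of the classical exponential example: with the total failure rate held fixed, a rate vector that is more dispersed in the majorization order makes the largest order statistic stochastically larger.

```python
import numpy as np

# Monte Carlo illustration of the majorization/Schur-function approach:
# for independent exponentials with a fixed total rate, a more dispersed
# rate vector (in the majorization order) yields a stochastically larger
# maximum order statistic.
rng = np.random.default_rng(0)
n_sim = 200_000
t_grid = np.linspace(0.5, 5.0, 10)

def max_order_stat_survival(rates, t_grid, n_sim, rng):
    """Estimate P(X_(n) > t) when X_i ~ Exp(rate_i) independently."""
    samples = rng.exponential(1.0 / np.asarray(rates), size=(n_sim, len(rates)))
    x_max = samples.max(axis=1)
    return np.array([(x_max > t).mean() for t in t_grid])

# (0.5, 1.5) majorizes (1.0, 1.0): same total rate, more dispersed.
surv_dispersed = max_order_stat_survival([0.5, 1.5], t_grid, n_sim, rng)
surv_equal = max_order_stat_survival([1.0, 1.0], t_grid, n_sim, rng)

# Stochastic ordering: the dispersed-rate maximum should dominate pointwise.
print(np.all(surv_dispersed >= surv_equal))  # expected: True (up to MC noise)
```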
2.
3.
4.
Extremal problems in large deviations of the F-statistic are considered. It is shown that the slowest rate of convergence over a specified class of distributions of the F-statistic is slower than exponential, and that the Bahadur efficiency of the F-statistic with respect to some distribution-free competitors is identically zero.
5.
Randomized controlled trials (RCTs) are the gold standard for evaluating the efficacy and safety of investigational interventions. If every patient in an RCT were to adhere to the randomized treatment, one could simply analyze the complete data to infer the treatment effect. However, intercurrent events (ICEs), including the use of concomitant medication for unsatisfactory efficacy and treatment discontinuation due to adverse events or lack of efficacy, may lead to interventions that deviate from the original treatment assignment. Therefore, defining the appropriate estimand (the parameter to be estimated) based on the primary objective of the study is critical before determining the statistical analysis method and analyzing the data. The International Council for Harmonisation (ICH) E9 (R1), adopted on November 20, 2019, provided five strategies for defining the estimand: treatment policy, hypothetical, composite variable, while on treatment, and principal stratum. In this article, we propose an estimand that uses a mix of strategies in handling ICEs. This estimand is an average of the "null" treatment difference for those with ICEs potentially related to safety and the treatment difference for the other patients if they were to complete the assigned treatments. Two examples from clinical trials evaluating antidiabetes treatments are provided to illustrate the estimation of this proposed estimand and to compare it with the estimates for estimands using hypothetical and treatment policy strategies in handling ICEs.
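As a rough illustration of how such a composite estimate might be assembled, the Python sketch below averages a "null" difference for the safety-ICE stratum with the observed difference for the remaining patients, weighting by the stratum proportions. The column names, the weighting, and the simulated data are all assumptions for illustration; in a real analysis, the "if they were to complete the assigned treatments" component would itself require hypothetical-strategy estimation (e.g., imputation) rather than the observed outcomes used here.

```python
import numpy as np
import pandas as pd

# Hypothetical trial data: one row per patient, with a flag for ICEs
# potentially related to safety. Column names are illustrative only.
rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "safety_ice": rng.random(n) < 0.15,      # ICE potentially related to safety
    "outcome": rng.normal(0.0, 1.0, n),      # e.g., change in HbA1c
})
df.loc[df["treated"] == 1, "outcome"] -= 0.4  # built-in treatment effect

# Composite estimand: the treatment difference is set to 0 ("null") for the
# safety-ICE stratum and estimated as usual for the remaining patients,
# then the two are averaged with weights equal to the stratum proportions.
p_ice = df["safety_ice"].mean()
no_ice = df[~df["safety_ice"]]
diff_no_ice = (no_ice.loc[no_ice["treated"] == 1, "outcome"].mean()
               - no_ice.loc[no_ice["treated"] == 0, "outcome"].mean())
composite = p_ice * 0.0 + (1 - p_ice) * diff_no_ice
print(f"composite treatment difference: {composite:.3f}")
```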
6.
The current guidelines, ICH E14, for the evaluation of non-antiarrhythmic compounds require a 'thorough' QT study (TQT) conducted during clinical development (ICH Guidance for Industry E14, 2005). Owing to the regulatory choice of margin (10 ms), TQT studies must be conducted to rigorous standards to ensure that variability is minimized. Some of the key sources of variation can be controlled by the use of randomization, crossover designs, standardization of electrocardiogram (ECG) recording conditions, and collection of replicate ECGs at each time point. However, one of the key factors in these studies is the baseline measurement, which, if not controlled and consistent across studies, could lead to significant misinterpretation. In this article, we examine three types of baseline methods widely used in TQT studies to derive a change from baseline in QTc (time-matched, time-averaged and pre-dose-averaged baseline). We discuss the impact of the baseline values on the guidance-recommended 'largest time-matched' analyses. Using simulation, we show the impact of these baseline approaches on the type I error and power for both crossover and parallel group designs. We also show that the power of the study decreases as the number of time points tested in a TQT study increases. A time-matched baseline method is recommended by several authors (Drug Saf. 2005; 28(2):115-125; Health Canada guidance document: guide for the analysis and review of QT/QTc interval data, 2006) owing to the existence of circadian rhythm in QT. However, the impact of the time-matched baseline method on statistical inference and sample size should be considered carefully during the design of a TQT study. The time-averaged baseline had the highest power in comparison with the other baseline approaches.
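To make the three baseline definitions concrete, here is a small Python sketch computing change from baseline in QTc for a single subject under each definition. The data layout and column names are hypothetical; the three definitions follow the descriptions in the abstract.

```python
import pandas as pd

# Illustrative single-subject ECG data (column names hypothetical):
# "base"    = baseline-day recordings at the nominal post-dose clock times,
# "predose" = replicate recordings taken just before dosing,
# "on"      = on-treatment recordings.
df = pd.DataFrame({
    "phase":  ["base", "base", "base", "predose", "predose", "on", "on", "on"],
    "time_h": [1.0, 2.0, 4.0, -0.5, 0.0, 1.0, 2.0, 4.0],
    "qtc":    [400.0, 402.0, 398.0, 399.0, 401.0, 405.0, 410.0, 401.0],
})
on_rx = df[df["phase"] == "on"].copy()
base = df[df["phase"] == "base"].set_index("time_h")["qtc"]

# (1) Time-matched: subtract the baseline value at the same nominal time.
on_rx["d_time_matched"] = on_rx["qtc"] - on_rx["time_h"].map(base)

# (2) Time-averaged: subtract the mean of all baseline-day recordings.
on_rx["d_time_averaged"] = on_rx["qtc"] - base.mean()

# (3) Pre-dose-averaged: subtract the mean of the pre-dose replicates.
predose_mean = df.loc[df["phase"] == "predose", "qtc"].mean()
on_rx["d_predose_averaged"] = on_rx["qtc"] - predose_mean
print(on_rx)
```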
7.
There are several approaches to assess or demonstrate pharmacokinetic dose proportionality. One statistical method is the traditional ANOVA model, where dose proportionality is evaluated using the bioequivalence limits. A more informative method is the mixed effects Power Model, where dose proportionality is assessed using a decision rule for the estimated slope. Here we propose analytical derivations of sample sizes for various designs (including crossover, incomplete block and parallel group designs) to be analysed according to the Power Model.
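For context, the Power Model is commonly written as AUC = α · dose^β, fit on the log scale, with dose proportionality corresponding to β = 1. The Python sketch below fits this by ordinary least squares and applies a commonly cited confidence-interval decision rule that derives slope limits from the bioequivalence limits (0.80, 1.25) and the highest-to-lowest dose ratio r. It is an illustration under those assumptions with invented data, not the paper's mixed-effects derivation or sample-size formulas.

```python
import numpy as np
import statsmodels.api as sm

# Power Model for dose proportionality: AUC = a * dose^beta, i.e.
# log(AUC) = log(a) + beta * log(dose). Proportionality holds when beta = 1.
rng = np.random.default_rng(2)
dose = np.repeat([10.0, 20.0, 40.0, 80.0], 12)    # hypothetical dose groups
auc = np.exp(0.5 + 1.02 * np.log(dose) + rng.normal(0, 0.2, dose.size))

X = sm.add_constant(np.log(dose))
fit = sm.OLS(np.log(auc), X).fit()
beta, (lo, hi) = fit.params[1], fit.conf_int(alpha=0.10)[1]  # 90% CI for slope

# Decision rule: with bioequivalence limits (0.80, 1.25) and dose ratio
# r = highest/lowest, accept dose proportionality if the CI for beta lies
# within [1 + ln(0.80)/ln(r), 1 + ln(1.25)/ln(r)].
r = dose.max() / dose.min()
theta_lo = 1 + np.log(0.80) / np.log(r)
theta_hi = 1 + np.log(1.25) / np.log(r)
print(f"beta = {beta:.3f}, 90% CI = ({lo:.3f}, {hi:.3f})")
print(f"acceptance range = ({theta_lo:.3f}, {theta_hi:.3f})")
print("dose proportional:", theta_lo < lo and hi < theta_hi)
```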
8.
Consider a sequence of dependent random variables X1, X2, …, Xn, where X1 has distribution F (or probability measure P), and the distribution of Xi+1 given X1, …, Xi and other covariates and environmental factors depends on F and the previous data, i = 1, …, n−1. General repair models give rise to such random variables as the failure times of an item subject to repair. There exist nonparametric non-Bayes methods of estimating F in the literature, for instance, Whitaker and Samaniego [1989. Estimating the reliability of systems subject to imperfect repair. J. Amer. Statist. Assoc. 84, 301–309], Hollander et al. [1992. Nonparametric methods for imperfect repair models. Ann. Statist. 20, 879–896] and Dorado et al. [1997. Nonparametric estimation for a general repair model. Ann. Statist. 25, 1140–1160]. Typically, these methods apply only to special repair models and also require repair data on N independent items followed until only one item is left awaiting a "perfect repair".
9.
Mixed model repeated measures (MMRM) is the most common analysis approach used in clinical trials for Alzheimer's disease and other progressive diseases measured with continuous outcomes over time. The model treats time as a categorical variable, which allows an unconstrained estimate of the mean for each study visit in each randomized group. Categorizing time in this way can be problematic when assessments occur off-schedule: including off-schedule visits can induce bias, while excluding them ignores valuable information and violates the intention-to-treat principle. This problem has been exacerbated by clinical trial visits delayed due to the COVID-19 pandemic. As an alternative to MMRM, we propose a constrained longitudinal data analysis with natural cubic splines that treats time as continuous and uses test version effects to model the mean over time. Compared to categorical-time models like MMRM and models that assume a proportional treatment effect, the spline model is shown to be more parsimonious and precise in real clinical trial datasets, and has better power and type I error in a variety of simulation scenarios.
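A minimal sketch of the continuous-time idea follows, assuming hypothetical data and using patsy's cr() natural cubic spline basis inside a statsmodels formula. A random intercept per subject stands in crudely for the within-subject correlation; the authors' constrained longitudinal data analysis would model this more carefully (e.g., with baseline means constrained equal across arms and a richer covariance structure), so this is an illustration, not their method.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format trial data: repeated continuous outcomes at
# (possibly off-schedule) visit times in months. Treating time as continuous
# with a natural cubic spline basis avoids binning delayed visits into
# nominal categorical time points, as MMRM must.
rng = np.random.default_rng(3)
n_subj, visits = 100, [0, 3, 6, 9, 12]
rows = []
for s in range(n_subj):
    arm = s % 2
    for v in visits:
        t = max(v + rng.normal(0, 0.5), 0.0) if v > 0 else 0.0  # visit jitter
        y = 25 + 0.4 * t + 0.15 * arm * t + rng.normal(0, 1.5)
        rows.append((s, arm, t, y))
df = pd.DataFrame(rows, columns=["subject", "arm", "time", "y"])

# Natural cubic spline of continuous time, interacting with arm so the
# treatment effect can vary smoothly over time; random intercept per subject.
model = smf.mixedlm("y ~ cr(time, df=3) * arm", df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```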