Similar Documents (20 results)
1.
Randomized controlled trials (RCTs) are the gold standard for evaluation of the efficacy and safety of investigational interventions. If every patient in an RCT were to adhere to the randomized treatment, one could simply analyze the complete data to infer the treatment effect. However, intercurrent events (ICEs) including the use of concomitant medication for unsatisfactory efficacy, treatment discontinuation due to adverse events, or lack of efficacy may lead to interventions that deviate from the original treatment assignment. Therefore, defining the appropriate estimand (the appropriate parameter to be estimated) based on the primary objective of the study is critical prior to determining the statistical analysis method and analyzing the data. The International Council for Harmonisation (ICH) E9 (R1), adopted on November 20, 2019, provided five strategies to define the estimand: treatment policy, hypothetical, composite variable, while on treatment, and principal stratum. In this article, we propose an estimand using a mix of strategies in handling ICEs. This estimand is an average of the “null” treatment difference for those with ICEs potentially related to safety and the treatment difference for the other patients if they would complete the assigned treatments. Two examples from clinical trials evaluating antidiabetes treatments are provided to illustrate the estimation of this proposed estimand and to compare it with the estimates for estimands using hypothetical and treatment policy strategies in handling ICEs.

2.
Sample Entropy (SampEn) statistics have provided insight into the amount of order present in several types of complex physiological time series, particularly the heart rate dynamics of premature infants. Very little, however, is known of SampEn's statistical properties and this has hindered strategies for optimization and significance testing. This article shows that SampEn statistics are asymptotically Gaussian under general conditions. A straightforward point estimate of the statistic's variance is developed and compared to empirical results obtained from complex surrogate data. Statistical tests are developed to quantify the amount and scale of order detected in a signal. These tests are used to show that significant order is, in fact, being detected in the heart rate dynamics of neonates, and to suggest strategies for optimizing the analysis parameters.
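The SampEn statistic discussed above is simple to compute. The following is a minimal Python sketch, not the authors' code: templates of length m and m+1 are compared under an absolute tolerance r, self-matches are excluded, and SampEn = -ln(A/B). Function name and defaults are our own.

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B counts pairs of matching templates of length m,
    A counts pairs of length m+1; a pair matches when the maximum absolute
    componentwise difference is at most r. Self-matches are excluded."""
    n = len(x)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - length + 1)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b)  # undefined if either count is zero
```

On a perfectly periodic series the statistic is small, reflecting high regularity; choices of m and r strongly affect the estimate, which is the optimization issue the article addresses.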

3.
Sequential experimentation is an indispensable strategy, applied widely across science and engineering. In such experiments, it is desirable that a given design retain its properties as far as possible when a few runs are added to it. Designs based on this sequential strategy are called extended designs. In this paper, we study the theoretical properties of such experimental strategies using a uniformity measure, and derive a lower bound for extended designs under the wrap-around L2-discrepancy measure. Moreover, we provide an algorithm to construct uniform (or nearly uniform) extended designs. For ease of understanding, some examples are presented, and a number of sequential strategies for a 27-run original design are tabulated for practical use.
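For reference, the wrap-around L2-discrepancy mentioned above has a closed form (Hickernell's formula). A small Python sketch for an n x s design with points scaled to [0, 1), function name ours:

```python
def wrap_around_l2_sq(design):
    """Squared wrap-around L2-discrepancy of an n x s design (rows are
    points in [0,1)^s): -(4/3)^s + (1/n^2) * sum_{i,j} prod_k
    [3/2 - |x_ik - x_jk| * (1 - |x_ik - x_jk|)]."""
    n = len(design)
    s = len(design[0])
    total = 0.0
    for xi in design:
        for xj in design:
            prod = 1.0
            for k in range(s):
                d = abs(xi[k] - xj[k])
                prod *= 1.5 - d * (1.0 - d)
            total += prod
    return -(4.0 / 3.0) ** s + total / n ** 2
```

Note the wrap-around metric uses only the coordinatewise distance d(1 - d), so the measure is invariant under coordinate shifts modulo 1, which is what makes it convenient for comparing extended designs.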

4.
In this paper, testing procedures based on double-sampling are proposed that yield gains in terms of power for the tests of General Linear Hypotheses. The distribution of a test statistic, involving both the measurements of the outcome on the smaller sample and of the covariates on the wider sample, is first derived. Then, approximations are provided in order to allow for a formal comparison between the powers of double-sampling and single-sampling strategies. Furthermore, it is shown how to allocate the measurements of the outcome and the covariates in order to maximize the power of the tests for a given experimental cost.

5.
When a candidate predictive marker is available, but evidence on its predictive ability is not sufficiently reliable, all-comers trials with marker stratification are frequently conducted. We propose a framework for planning and evaluating prospective testing strategies in confirmatory, phase III marker-stratified clinical trials based on a natural assumption on heterogeneity of treatment effects across marker-defined subpopulations, where weak rather than strong control is permitted for multiple population tests. For phase III marker-stratified trials, it is expected that treatment efficacy is established in a particular patient population, possibly in a marker-defined subpopulation, and that the marker accuracy is assessed when the marker is used to restrict the indication or labelling of the treatment to a marker-based subpopulation, i.e., assessment of the clinical validity of the marker. In this paper, we develop statistical testing strategies based on criteria explicitly dedicated to the marker assessment, including those examining treatment effects in marker-negative patients. As existing and newly developed statistical testing strategies can assert treatment efficacy for either the overall patient population or the marker-positive subpopulation, we also develop criteria for evaluating the operating characteristics of the statistical testing strategies based on the probabilities of asserting treatment efficacy across marker subpopulations. Numerical evaluations comparing the statistical testing strategies under the developed criteria are provided.

6.
We compare different Bayesian strategies for testing a parametric model versus a nonparametric alternative on the grounds of their ability to solve the inconsistency problems arising when using the Bayes factor under certain conditions. A preliminary critical discussion of this inconsistency is provided.

7.
Matrix models for population dynamics have recently been studied intensively and have many applications to theoretical and applied problems (conservation, management). The computer program ULM (Unified Life Models) brings together much of the current knowledge on the subject. It is a powerful tool for studying the life cycles of species and meta-populations. In the general framework of discrete dynamical systems and symbolic computation, simple commands and convenient graphics are provided to assist the biologist. The main features of the program are shown through detailed examples: a simple model of a starling population life cycle is first presented, introducing the basic concepts (growth rates, stable age distribution, sensitivities); the same model is then used to study competing strategies in a varying environment (extinction probabilities, stochastic sensitivities); a meta-population model with migrations is presented next; and some results on migration strategies and evolutionarily stable strategies are finally proposed.
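The basic quantities such a program reports for a matrix model (asymptotic growth rate and stable age distribution) can be sketched in a few lines of Python via power iteration. The 3-age-class projection matrix below is a made-up illustration, not the starling example from the paper:

```python
def leslie_summary(L, iters=500):
    """Dominant eigenvalue (asymptotic growth rate) and stable age
    distribution of a nonnegative projection matrix L, by power iteration."""
    n = len(L)
    v = [1.0 / n] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(L[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)               # since v sums to 1, sum(L v) converges to lambda
        v = [x / lam for x in w]   # renormalize so entries sum to 1
    return lam, v

# Hypothetical 3-age-class matrix: fecundities on the first row,
# survival probabilities on the subdiagonal.
L = [[0.0, 1.2, 1.8],
     [0.5, 0.0, 0.0],
     [0.0, 0.6, 0.0]]
growth_rate, stable_age = leslie_summary(L)
```

A growth rate above 1 indicates a growing population; sensitivities of the growth rate to matrix entries, as computed by ULM, can be obtained by perturbing L and re-running the iteration.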

8.
Accounting for an auxiliary covariate in a two-phase sampling strategy in order to reduce the experimental costs was initially proposed by Cochran (Sampling Techniques, 2nd Edition, Wiley, New York, 1963, Sampling Techniques, 3rd Edition, Wiley, New York, 1977) in the context of sample surveys. Conniffe and Moran (Biometrics 28 (1972) 1011) have extended this methodology to the estimation of linear regression functions. More recently, Conniffe (J. Econometrics 27 (1985) 179) and Causeur and Dhorne (Biometrics 54 (4) (1998) 1591) have derived two-phase sampling estimators of the linear regression function in the situation where many auxiliary covariates are available. A detailed study of the distributional aspects of these estimators is provided by Causeur (Statistics 32 (1999) 297). In the same multivariate context, this paper aims at an extension of the double-sampling strategies to monotone designs accounting for differences between the costs of subsets of covariates. In particular, the maximum-likelihood estimators are provided and asymptotic solutions for the optimal designs are derived.

9.
This paper describes a nonparametric approach to make inferences for aggregate loss models in the insurance framework. We assume that an insurance company provides a historical sample of claims given by claim occurrence times and claim sizes. Furthermore, information may be incomplete as claims may be censored and/or truncated. In this context, the main goal of this work consists of fitting a probability model for the total amount that will be paid on all claims during a fixed future time period. In order to solve this prediction problem, we propose a new methodology based on nonparametric estimators for the density functions with censored and truncated data, the use of Monte Carlo simulation methods and bootstrap resampling. The developed methodology is useful to compare alternative pricing strategies in different insurance decision problems. The proposed procedure is illustrated with a real dataset provided by the insurance department of an international commercial company.
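A minimal sketch of the Monte Carlo step described above: the aggregate loss S = X_1 + ... + X_N over one period is simulated with N Poisson-distributed and claim sizes bootstrapped from the observed history. The censoring/truncation corrections and nonparametric density estimation of the paper are omitted, and all names and parameters are ours:

```python
import random

def simulate_total_claims(claim_sizes, rate, n_sims=10000, seed=42):
    """Monte Carlo draws of the compound-Poisson aggregate loss:
    N ~ Poisson(rate) claims per period, each claim size resampled
    (bootstrap) from the historical sample `claim_sizes`."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_sims):
        # Poisson(rate) draw: count unit-rate exponential arrivals in [0, rate]
        n, t = 0, rng.expovariate(1.0)
        while t < rate:
            n += 1
            t += rng.expovariate(1.0)
        totals.append(sum(rng.choice(claim_sizes) for _ in range(n)))
    return totals
```

The empirical distribution of the returned totals is the fitted model for the future total payout; quantiles of it give reserve or pricing figures.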

10.
This study examines the data that result from multiple promotional strategies when the data are autocorrelated. Time series intervention analysis is the traditional way to analyze such data, focusing on the effects of a single or a few interventions. Time series intervention analysis delivers good results, provided that there is a known and predetermined schedule of future interventions. This study opts for a different type of analysis. Instead of adopting the traditional time series intervention analysis with only one or a few interventions, this study explores the possibility of integrating time series intervention analysis and a knowledge-based system to analyze multiple-interventions data. This integrated approach does not require attempts to ascertain the effects of future interventions. Through the analysis of actual promotion data, this study shows the benefits of using the proposed method.

11.
In this paper, we consider the problem of hazard rate estimation in the presence of covariates, for survival data with censoring indicators missing at random. In the context usually denoted by MAR (missing at random, as opposed to MCAR, missing completely at random, which requires an additional independence assumption), we propose nonparametric adaptive strategies based on model selection methods for estimators admitting finite-dimensional developments in functional orthonormal bases. Theoretical risk bounds are provided, showing that the estimators behave well in terms of mean integrated squared error (MISE). Simulation experiments illustrate the statistical procedure.

12.
Nonparametric bootstrapping for hierarchical data is relatively underdeveloped and not straightforward: it certainly does not make sense to use simple nonparametric resampling, which treats all observations as independent. We provide several resampling strategies for hierarchical data, prove that nonparametric bootstrapping at the highest level (randomly sampling the highest-level units with replacement, then sampling all other levels without replacement within each selected unit) is better than bootstrapping at lower levels, analyze real data, and report simulation studies.
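The highest-level resampling scheme described above amounts to a few lines of Python. This is a hypothetical sketch, not the authors' code; `clusters` stands for a list of highest-level units, each holding its lower-level observations intact:

```python
import random

def cluster_bootstrap(clusters, rng=None):
    """One nonparametric bootstrap replicate at the highest level:
    highest-level units are drawn with replacement, and the lower-level
    observations inside each selected unit are kept as-is (no resampling
    within units)."""
    rng = rng or random.Random()
    n = len(clusters)
    return [clusters[rng.randrange(n)] for _ in range(n)]
```

Repeating this to recompute a statistic on each replicate yields the bootstrap distribution; the within-unit dependence structure is preserved because units travel whole.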

13.
Designed experiments are a key component of many companies' improvement strategies. Because completely randomized experiments are not always reasonable from a cost or physical perspective, split-plot experiments are prevalent. The recommended analysis accounts for the different sources of variation affecting the whole-plot and split-plot errors. However, experiments on industrial processes must be run, and consequently analyzed, quite differently from ones run in a controlled environment. Such experiments are typically subject to a wide array of uncontrolled, and barely understood, variation. In particular, it is important to examine the experimental results for additional, unanticipated sources of variation. In this paper, we consider how unanticipated, stratified effects may influence a split-plot experiment and discuss further exploratory analysis to indicate the presence of stratified effects. Examples of such experiments are provided, additional tests are suggested and discussed in light of their power, and recommendations are given.

14.
The need to use rigorous, transparent, clearly interpretable, and scientifically justified methodology for preventing and dealing with missing data in clinical trials has been a focus of much attention from regulators, practitioners, and academicians over the past years. New guidelines and recommendations emphasize the importance of minimizing the amount of missing data and carefully selecting primary analysis methods on the basis of assumptions regarding the missingness mechanism suitable for the study at hand, as well as the need to stress-test the results of the primary analysis under different sets of assumptions through a range of sensitivity analyses. Some methods that could be effectively used for dealing with missing data have not yet gained widespread usage, partly because of their underlying complexity and partly because of lack of relatively easy approaches to their implementation. In this paper, we explore several strategies for missing data on the basis of pattern mixture models that embody clear and realistic clinical assumptions. Pattern mixture models provide a statistically reasonable yet transparent framework for translating clinical assumptions into statistical analyses. Implementation details for some specific strategies are provided in an Appendix (available online as Supporting Information), whereas the general principles of the approach discussed in this paper can be used to implement various other analyses with different sets of assumptions regarding missing data. Copyright © 2013 John Wiley & Sons, Ltd.

15.
Rao ([1], p. 475, equation 8c.6.10) considers a minimization problem in which one is required to minimize a sum of quadratic forms subject to certain linear restrictions. The proof given by Rao, although elegant, is involved, as it uses N-dimensional geometric and regression-theory arguments. An alternative, simpler and more direct algebraic proof of this minimization problem may therefore be of interest.

16.
Algorithms     
The main reason for the limited use of multivariate discrete models is the difficulty in calculating the required probabilities. The task is usually undertaken via recursive relationships which become quite computationally demanding for high dimensions and large values. The present paper discusses efficient algorithms that make use of the recurrence relationships in a manner that reduces the computational effort and thus allows for easy and cheap calculation of the probabilities. The most common multivariate discrete distribution, the multivariate Poisson distribution, is treated. Real data problems are provided to motivate the use of the proposed strategies. Extensions of our results are discussed. It is shown that probabilities, for a large family of multivariate distributions, can be computed efficiently via our algorithms.
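As an illustration of such a recurrence scheme (a hedged sketch, not the paper's algorithms), the bivariate Poisson pmf with parameters t1, t2 and common-shock parameter t3 can be computed by a memoised recursion instead of the explicit, and costly, double-sum formula:

```python
import math
from functools import lru_cache

def bivariate_poisson_pmf(x, y, t1, t2, t3):
    """P(X=x, Y=y) for the bivariate Poisson (X = X1+X3, Y = X2+X3 with
    independent Poisson components), via the standard recurrences
        x * P(x, y) = t1 * P(x-1, y) + t3 * P(x-1, y-1)
        y * P(x, y) = t2 * P(x, y-1) + t3 * P(x-1, y-1)
    with base case P(0, 0) = exp(-(t1 + t2 + t3))."""
    @lru_cache(maxsize=None)
    def p(i, j):
        if i < 0 or j < 0:
            return 0.0
        if i == 0 and j == 0:
            return math.exp(-(t1 + t2 + t3))
        if i > 0:
            return (t1 * p(i - 1, j) + t3 * p(i - 1, j - 1)) / i
        return (t2 * p(i, j - 1) + t3 * p(i - 1, j - 1)) / j
    return p(x, y)
```

Each probability is computed in O(x * y) operations with memoisation, whereas the double-sum formula re-evaluates factorials and binomial coefficients at every term.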

17.
Adaptive sampling strategies for ecological and environmental studies are described in this paper. The motivations for adaptive sampling are discussed. Developments in this area over recent decades are reviewed. Adaptive cluster sampling and a number of its variations are described. The newer class of adaptive web sampling designs and their spatial sampling uses are discussed. Case studies in the use of adaptive sampling strategies with ecological populations are cited. The nature of optimal sampling strategies is described. Design-based and model-based approaches to inference with adaptive sampling strategies are summarized.

18.
The risk of a sampling strategy is a function on the parameter space, which is the set of all vectors composed of possible values of the variable of interest. It seems natural to ask for a minimax strategy, minimizing the maximal risk. So far, answers have been provided for completely symmetric parameter spaces; results available for more general spaces refer to sample size 1 or to large sample sizes allowing for asymptotic approximation. In the present paper we consider arbitrary sample sizes, derive a lower bound for the maximal risk under very weak conditions, and obtain minimax strategies for a large class of parameter spaces. Our results do not apply to parameter spaces with strong deviations from symmetry. For such spaces a minimax strategy prescribes considering only a small number of samples and takes on a non-random, purposive character, which is in accordance with the common practice of completely sampling a stratum of large units.

19.
Sampling from finite populations when there is autocorrelation between the population units is the subject of this paper. The case in which the autocorrelation function for the population is convex is examined. We first provide the best unbiased predictor of the population mean under the assumed model. For this predictor, the optimal class of sampling strategies under the model-complete criterion is determined. For practical applications, a subclass of the above class of strategies is considered, and it is shown in Section 4 that the centrally located sample is the optimal one. Finally, some numerical examples and a comparison study are presented in order to illustrate, in practice, the merit of the optimal strategies suggested in the present paper.

20.
In practice, it is important to find optimal allocation strategies for continuous responses with multiple treatments under various optimization criteria. In this article, we focus on exponential responses. For a multivariate test of homogeneity, we obtain the optimal allocation strategies that maximize power while (1) fixing the sample size and (2) fixing the expected total responses. The doubly adaptive biased coin design [Hu, F., Zhang, L.-X., 2004. Asymptotic properties of doubly adaptive biased coin designs for multi-treatment clinical trials. The Annals of Statistics 32, 268–301] is then used to implement the optimal allocation strategies. Simulation results show that the proposed procedures have advantages over complete randomization from both inferential (power) and ethical standpoints on average. It is important to note that one can usually implement optimal allocation strategies numerically for other continuous responses, though obtaining the closed form of the optimal allocation theoretically is usually not easy.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)