Similar Articles
20 similar articles found.
1.
In recent years, high failure rates have been observed in phase III trials. One of the main reasons is overoptimistic planning assumptions for phase III, resulting from limited phase II information and/or unawareness of realistic success probabilities. We present an approach for planning a phase II trial in a time-to-event setting that considers the whole phase II/III clinical development programme. We derive stopping boundaries after phase II that minimise the number of events, subject to constraints on the conditional probabilities of a correct go/no-go decision after phase II as well as on the conditional success probabilities for phase III. In addition, we give general recommendations for the choice of the phase II sample size. Our simulations show that the unconditional probabilities of a go/no-go decision, as well as the unconditional success probabilities for phase III, are influenced by the number of events observed in phase II. However, choosing more than 150 events in phase II appears unnecessary, as the impact on these probabilities then becomes quite small. We recommend considering aspects such as the number of compounds in phase II and the resources available when determining the sample size. The fewer the compounds and the scarcer the resources for phase III, the higher the investment in phase II should be. Copyright © 2015 John Wiley & Sons, Ltd.
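As a rough illustration of how the number of phase II events drives the decision probabilities, the sketch below uses the standard asymptotic approximation log(HR-hat) ~ N(log HR, 4/d). The go threshold hr_go and the true hazard ratio are hypothetical; the paper's actual optimisation of stopping boundaries is more involved.

```python
import numpy as np
from scipy.stats import norm

def prob_go(true_hr, d_events, hr_go=0.85):
    """Probability of a 'go' decision after phase II, defined here as the
    observed hazard ratio falling below a (hypothetical) threshold hr_go.
    Uses the standard approximation log(HR_hat) ~ N(log(true_hr), 4/d)."""
    se = 2.0 / np.sqrt(d_events)
    return norm.cdf((np.log(hr_go) - np.log(true_hr)) / se)

# The gain flattens beyond roughly 150 phase II events, consistent with
# the recommendation in the abstract (true_hr = 0.75 is an assumption).
for d in (50, 100, 150, 300):
    print(d, round(prob_go(true_hr=0.75, d_events=d), 3))
```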

2.
Most clinical studies that investigate the impact of a therapy also record the frequency of adverse events in order to monitor the safety of the intervention. Study reports typically summarise adverse event data by tabulating the frequencies of the worst grade experienced, but provide no details of the temporal profiles of specific types of adverse events. Such 'toxicity profiles' are potentially important tools in disease management and in the assessment of newer therapies, including targeted treatments and immunotherapy, where different types of toxicity may be more common at various times during long-term drug exposure. Toxicity profiles of commonly experienced adverse events arising from long-term treatment could assist in evaluating the costs of the health care benefits of therapy. We show how to generate toxicity profiles using an adaptation of the ordinal time-to-event model comprising a two-step process: estimating the multinomial response probabilities using multinomial logistic regression, and combining these with recurrent time-to-event hazard estimates to produce cumulative event probabilities for each of the multinomial adverse event response categories. Such a model permits the simultaneous assessment of the risk of events over time and provides cumulative risk probabilities for each type of adverse event response. The method can be applied more generally by using different models to estimate the outcome/response probabilities. The method is illustrated by developing toxicity profiles for three distinct types of adverse events associated with two treatment regimens for patients with advanced breast cancer.
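A minimal sketch of the two-step construction, with hypothetical data and a single (non-recurrent) event time per subject for brevity: a multinomial logistic regression supplies the grade probabilities, a Nelson-Aalen estimate supplies the cumulative event probability, and their product gives a per-grade profile. The paper's recurrent-event version is more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from lifelines import NelsonAalenFitter

# Step 1: multinomial response probabilities for the AE grade categories,
# here with a single treatment-arm covariate (hypothetical data).
X = np.array([[0], [0], [1], [1], [0], [1], [1], [0]])  # arm indicator
grade = np.array([1, 2, 1, 3, 2, 3, 1, 1])              # worst AE grade
p_hat = LogisticRegression().fit(X, grade).predict_proba([[1]])[0]

# Step 2: cumulative hazard of AE occurrence, converted to a cumulative
# event probability F(t) = 1 - exp(-H(t)).
durations = [3, 5, 2, 8, 4, 6, 7, 1]                    # weeks to first AE
observed = [1, 1, 1, 0, 1, 1, 0, 1]
naf = NelsonAalenFitter().fit(durations, observed)
F_t = 1 - np.exp(-naf.cumulative_hazard_.values.ravel())

# Combine: cumulative probability of an AE of each grade over time
# (rows: event times, columns: grade categories).
profile = np.outer(F_t, p_hat)
```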

3.
While the literature on multivariate models for continuous data flourishes, there is a lack of models for multivariate counts. We aim to contribute to this framework by extending the well-known class of univariate hidden Markov models to the multidimensional case, introducing multivariate Poisson hidden Markov models. Each state of the extended model is associated with a different multivariate discrete distribution. We consider different distributions with Poisson marginals, starting from the multivariate Poisson distribution and then extending to copula-based distributions to allow flexible dependence structures. An EM-type algorithm is developed for maximum likelihood estimation. A real data application is presented to illustrate the usefulness of the proposed models. In particular, we apply the models to the occurrence of strong earthquakes (surface wave magnitude ≥ 5) in three seismogenic subregions of the broad North Aegean Sea region for the period from 1 January 1981 to 31 December 2008. Earthquakes occurring in one subregion may trigger events in adjacent ones, and hence the observed time series of events are cross-correlated. It is evident from the results that the three subregions interact with each other at times differing by up to a few months. This migration of seismic activity is captured by the model as a transition to a state of higher seismicity.
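A simulation sketch of the model's building blocks, assuming the classical common-shock construction of the multivariate Poisson emission (Y_j = X_j + X_0, so the covariance between subregions equals the common rate). The rates, transition matrix, and two-state structure are hypothetical, and the EM estimation developed in the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmultipoisson(lam, lam0):
    """One draw from a common-shock multivariate Poisson: Y_j = X_j + X_0,
    giving Poisson marginals with rates lam[j] + lam0 and covariance lam0."""
    return rng.poisson(lam) + rng.poisson(lam0)

# Two hidden states (low/high seismicity), each with its own emission.
states = {0: ([0.2, 0.3, 0.1], 0.05), 1: ([1.5, 2.0, 1.0], 0.8)}
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])           # state transition probabilities

s, counts = 0, []
for _ in range(240):                   # e.g. monthly counts, 3 subregions
    lam, lam0 = states[s]
    counts.append(rmultipoisson(np.array(lam), lam0))
    s = rng.choice(2, p=P[s])
counts = np.array(counts)              # shape (240, 3), cross-correlated
```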

4.
We examine three media exposure distribution (e.d.) simulation methods. The first is based on the maximum likelihood estimate of an individual's exposure, the second on 'personal probability' (Greene 1970), and the third on a dependent Bernoulli trials model (Klotz 1973). The last method uses population exposure probabilities rather than individual exposure probabilities, thereby markedly reducing computation time. Magazine exposure data are used to compare the accuracy and computation times of the simulation methods with a log-linear e.d. model (Danaher 1988b) and the popular Metheringham (1964) model based on the beta-binomial distribution (BBD). The results show that the simulation methods are not as accurate as the log-linear model but are more accurate than Metheringham's model. However, all the simulation methods take less computation time than the log-linear model for schedules with more than six magazines, making them viable competitors for large schedule sizes.
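For reference, a sketch of the Metheringham-style beta-binomial exposure distribution that the simulation methods are compared against; the schedule size and shape parameters below are hypothetical.

```python
from scipy.stats import betabinom

# Exposure distribution: the number of exposures out of n insertions
# follows a beta-binomial with (hypothetical) shape parameters a, b.
n, a, b = 6, 0.8, 2.4
ed = [betabinom.pmf(k, n, a, b) for k in range(n + 1)]
reach = 1 - betabinom.pmf(0, n, a, b)   # P(at least one exposure)
print([round(p, 3) for p in ed], round(reach, 3))
```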

5.
We present a mathematical theory of objective, frequentist chance phenomena that uses a set of probability measures as its model. In this work, sets of measures are not viewed as a statistical compound hypothesis or as a tool for modeling imprecise subjective behavior. Instead, we use sets of measures to model stable (although not stationary in the traditional stochastic sense) physical sources of finite time series data that have highly irregular behavior. Such models give a coarse-grained picture of the phenomena, keeping track of the range of the possible probabilities of the events. We present methods to simulate finite data sequences coming from a source modeled by a set of probability measures, and to estimate the model from finite time series data. The estimation of the set of probability measures is based on the analysis of a set of relative frequencies of events taken along subsequences selected by a collection of rules. In particular, we provide a universal methodology for finding a family of subsequence selection rules that can estimate any set of probability measures with high probability.
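A toy sketch of the estimation idea: relative frequencies are computed along subsequences picked out by selection rules that look only at the past. The binary series and the three example rules below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(1, 0.5, size=2_000)    # stand-in binary time series

# Subsequence selection rules: each decides, from the past alone, whether
# the next observation is included (hypothetical example rules).
rules = {
    "all":            lambda past: True,
    "after a 1":      lambda past: len(past) > 0 and past[-1] == 1,
    "even positions": lambda past: len(past) % 2 == 0,
}

for name, rule in rules.items():
    sel = [x[t] for t in range(len(x)) if rule(x[:t])]
    print(name, round(float(np.mean(sel)), 3))
# The spread of these relative frequencies across a family of rules is
# what estimates the range of probabilities in the set-of-measures model.
```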

6.
Hierarchical models are rather common in uncertainty theory. They arise when there is a 'correct' or 'ideal' (so-called first-order) uncertainty model about a phenomenon of interest, but the modeler is uncertain about what it is. The modeler's uncertainty is then called second-order uncertainty. For most of the hierarchical models in the literature, both the first- and the second-order models are precise, i.e., they are based on classical probabilities. In the present paper, I propose a specific hierarchical model that is imprecise at the second level, which means that at this level, lower probabilities are used. No restrictions are imposed on the underlying first-order model: it is allowed to be either precise or imprecise. I argue that this type of hierarchical model generalizes and includes a number of existing uncertainty models, such as imprecise probabilities, Bayesian models, and fuzzy probabilities. The main result of the paper is what I call precision-imprecision equivalence: the implications of the model for decision making and statistical reasoning are the same, whether the underlying first-order model is assumed to be precise or imprecise.

7.
We study the blocking probability in a continuous-time loss queue in which resources can be claimed a random time in advance. We identify classes of loss queues where advance reservation results in increased or decreased blocking probabilities. The lower blocking probabilities are achieved because the system tends to favor short jobs. We provide analytical and numerical results to establish the connection between the system's parameters and either an increase or a decrease of blocking probabilities, compared to the system without reservation.
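As a baseline, a sketch of the blocking probability in the loss queue without reservation, via the standard Erlang-B recursion; the advance-reservation dynamics analysed in the paper are not captured here.

```python
def erlang_b(servers: int, load: float) -> float:
    """Blocking probability of an M/M/c/c loss queue (Erlang B), via the
    standard stable recursion B(c) = a*B(c-1) / (c + a*B(c-1))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = load * b / (c + load * b)
    return b

# Baseline against which a reservation scheme would raise or lower
# blocking, depending on the parameter regime described in the abstract.
print(round(erlang_b(servers=10, load=8.0), 4))
```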

8.
We consider the problem of change-point detection in multivariate time series. The multivariate distribution of the observations is supposed to follow a graphical model whose graph and parameters are affected by abrupt changes throughout time. We demonstrate that it is possible to perform exact Bayesian inference whenever one considers a simple class of undirected graphs, called spanning trees, as possible structures. We are then able to integrate over the graph and segmentation spaces at the same time by combining classical dynamic programming with algebraic results pertaining to spanning trees. In particular, we show that quantities such as posterior distributions for change-points or posterior edge probabilities over time can be obtained efficiently. We illustrate our results on both synthetic and experimental data arising from biology and neuroscience.
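The algebraic result that makes exact summation over spanning trees tractable is the weighted Matrix-Tree theorem; a minimal sketch with hypothetical edge weights:

```python
import numpy as np

def spanning_tree_sum(W):
    """Sum over all spanning trees of the product of edge weights, via the
    weighted Matrix-Tree theorem: any cofactor of the weighted Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.det(L[1:, 1:])

# Symmetric edge weights for 4 nodes (e.g. per-edge marginal likelihoods,
# hypothetical values); integrating over the tree space reduces to one
# determinant, which is what keeps the Bayesian computation exact.
W = np.array([[0, 2, 1, 1],
              [2, 0, 3, 1],
              [1, 3, 0, 2],
              [1, 1, 2, 0]], dtype=float)
print(spanning_tree_sum(W))
```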

9.
Many probability distributions can be represented as compound distributions. Consider some parameter vector as random; the compound distribution is the expected distribution of the variable of interest given the random parameters. Our idea is to define a partition of the domain of the random parameters, so that the expected density of the variable of interest can be represented as a finite mixture of conditional densities. We then model the mixture probabilities of the conditional densities using information on population categories, thus modifying the original overall model. We thereby obtain specific models for sub-populations that stem from the overall model. The distribution of a sub-population of interest is thus completely specified in terms of mixing probabilities; all characteristics of interest can be derived from this distribution, and the comparison between sub-populations proceeds directly from the comparison of the mixing probabilities. A real example based on EU-SILC data is given, and the methodology is then investigated through simulation.
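A sketch of the idea with hypothetical components and categories: sub-populations share the same conditional densities and differ only through their mixing probabilities.

```python
from scipy.stats import norm

# Conditional densities implied by the partition of the parameter domain
# (hypothetical normal components standing in for them).
components = [norm(10, 2), norm(20, 4), norm(35, 6)]

# Category-specific mixing probabilities over the same components.
mix = {"employed": [0.5, 0.4, 0.1], "retired": [0.2, 0.5, 0.3]}

def density(x, group):
    """Density for a sub-population: a finite mixture with group-specific
    weights; comparing groups reduces to comparing the weight vectors."""
    return sum(w * c.pdf(x) for w, c in zip(mix[group], components))

print(density(18.0, "employed"), density(18.0, "retired"))
```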

10.
A model to accommodate time-to-event ordinal outcomes was proposed by Berridge and Whitehead. Very few studies have adopted this approach, despite its appeal in incorporating several ordered categories of event outcome. More recently, there has been increased interest in utilizing recurrent events to analyze practical endpoints in the study of disease history and to help quantify the changing pattern of disease over time. For example, in studies of heart failure, the analysis of a single fatal event no longer provides sufficient clinical information to manage the disease. Similarly, the grade/frequency/severity of adverse events may be more important than simply prolonged survival in studies of toxic therapies in oncology. We propose an extension of the ordinal time-to-event model that allows for multiple/recurrent events, in the form of marginal models (where all subjects are at risk for each recurrence, irrespective of whether they have experienced previous recurrences) and conditional models (where subjects are at risk of a recurrence only if they have experienced a previous recurrence). These models rely on marginal and conditional estimates of the instantaneous baseline hazard and provide estimates of the probabilities of an event of each severity for each recurrence over time. We outline how confidence intervals for these probabilities can be constructed, illustrate how to fit these models, and provide examples of the methods together with an interpretation of the results.
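A fragmentary sketch of the conditional variant, with hypothetical gap-time data: the baseline hazard is estimated separately for each recurrence stratum, using only subjects who reached that recurrence. Combining these with per-severity probabilities (as in the toxicity-profile entry above) would give per-recurrence, per-severity event probabilities.

```python
import pandas as pd
from lifelines import NelsonAalenFitter

# One row per (subject, recurrence number k); gap times in weeks
# (hypothetical data).
df = pd.DataFrame({
    "id":    [1, 1, 2, 3, 3, 3, 4],
    "k":     [1, 2, 1, 1, 2, 3, 1],
    "gap":   [5, 4, 6, 3, 4, 3, 8],
    "event": [1, 0, 1, 1, 1, 0, 1],
})

# Conditional model: a subject enters the risk set for recurrence k only
# after experiencing recurrence k-1, so hazards are estimated per stratum.
for k, stratum in df.groupby("k"):
    naf = NelsonAalenFitter().fit(stratum["gap"], stratum["event"])
    print(f"recurrence {k}: H(t_max) =",
          float(naf.cumulative_hazard_.iloc[-1, 0]))

# A marginal model would instead keep every subject at risk for every k
# (time measured from study entry), irrespective of earlier recurrences.
```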

11.
This paper introduces a Markov-switching model in which the transition probabilities depend on higher-frequency indicators and their lags through polynomial weighting schemes. The MSV-MIDAS model is estimated by maximum likelihood (ML) with a slightly modified version of Hamilton's filter. Monte Carlo simulations show that ML provides accurate estimates, but they suggest some caution in interpreting tests of the parameters in the transition probabilities. We apply this new model to forecasting business cycle turning points in the United States, and detect recessions accurately by exploiting the link between GDP growth and higher-frequency variables from the financial and energy markets.
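A sketch of a Hamilton filter in which the transition probabilities vary with an exogenous indicator z through a logit link; this is a simplified stand-in for the MIDAS polynomial weighting of higher-frequency lags, and all parameter values below are hypothetical. Maximising the returned log-likelihood over (mu, sigma, beta) would give the ML estimates.

```python
import numpy as np
from scipy.stats import norm

def hamilton_filter(y, z, mu, sigma, beta):
    """Filtered state probabilities and log-likelihood for a two-state
    Markov-switching mean model with time-varying transition
    probabilities p00(t), p11(t) driven by z through a logit link."""
    xi = np.array([0.5, 0.5])          # initial state probabilities
    loglik = 0.0
    for t in range(len(y)):
        p00 = 1 / (1 + np.exp(-(beta[0] + beta[1] * z[t])))
        p11 = 1 / (1 + np.exp(-(beta[2] + beta[3] * z[t])))
        P = np.array([[p00, 1 - p00],
                      [1 - p11, p11]])
        pred = xi @ P                  # one-step-ahead state probabilities
        lik = pred * norm.pdf(y[t], loc=mu, scale=sigma)
        loglik += np.log(lik.sum())
        xi = lik / lik.sum()           # Bayes update given y[t]
    return loglik, xi

# Hypothetical quarterly growth y and a higher-frequency-derived indicator z.
rng = np.random.default_rng(3)
y, z = rng.normal(1.0, 1.0, 80), rng.normal(0.0, 1.0, 80)
ll, xi = hamilton_filter(y, z, mu=np.array([1.5, -0.5]), sigma=1.0,
                         beta=[2.0, 0.5, 1.5, -0.5])
```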

12.
In bone marrow transplantation studies, patients are followed over time and a number of events may be observed. These include both ultimate events, like death and relapse, and transient events, like graft-versus-host disease and graft recovery. Such studies therefore lend themselves to an analytic approach based on multi-state models. We give a review of such methods, with emphasis on regression models for both transition intensities and transition and state-occupation probabilities. Both semi-parametric models, like the Cox regression model, and parametric models based on piecewise constant intensities are discussed.
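A sketch of the parametric piecewise-constant-intensity building block, using hypothetical single-transition data: the maximum likelihood estimate of the intensity on each interval is the occurrence/exposure rate.

```python
import numpy as np

def piecewise_rates(times, events, cuts):
    """Occurrence/exposure estimates of a piecewise constant transition
    intensity: events divided by person-time within each interval."""
    rates = []
    for lo, hi in zip(cuts[:-1], cuts[1:]):
        exposure = np.clip(times, lo, hi) - lo   # time spent in [lo, hi)
        d = np.sum((events == 1) & (times >= lo) & (times < hi))
        rates.append(d / exposure.sum())
    return rates

# Months to relapse after transplant, with event indicator (1 = relapse,
# 0 = censored); hypothetical data, intensity constant on each interval.
times  = np.array([2., 5., 7., 12., 14., 20., 24., 30.])
events = np.array([1,  1,  0,  1,   0,   1,   0,   0])
print(piecewise_rates(times, events, cuts=[0, 6, 12, 36]))
```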

13.
Occupancy models are used in statistical ecology to estimate species dispersion. The two components of an occupancy model are the detection and occupancy probabilities, with the main interest being in the latter. We show that for the homogeneous occupancy model there is an orthogonal transformation of the parameters that gives a natural two-stage inference procedure based on a conditional likelihood. We then extend this to a partial likelihood that gives explicit estimators of the model parameters. By allowing the separate modeling of the detection and occupancy probabilities, the extension of the two-stage approach to more general models has the potential to simplify the computational routines used there.
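A sketch of the homogeneous occupancy likelihood under hypothetical detection histories: a site with y detections out of K visits contributes psi * Binom(y; K, p), plus (1 - psi) when the site yielded no detections. The orthogonal-transformation and two-stage machinery of the abstract is not reproduced; this is plain joint maximisation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def neg_loglik(par, y, K):
    """Homogeneous occupancy model: each site is occupied with probability
    psi, and an occupied site is detected on each of K independent visits
    with probability p. y = number of detections per site."""
    psi = 1 / (1 + np.exp(-par[0]))      # logit-scale parameters
    p = 1 / (1 + np.exp(-par[1]))
    lik = psi * binom.pmf(y, K, p) + (1 - psi) * (y == 0)
    return -np.sum(np.log(lik))

# Detections out of K = 5 visits at 8 sites (hypothetical data).
y, K = np.array([0, 0, 2, 0, 3, 1, 0, 4]), 5
fit = minimize(neg_loglik, x0=[0.0, 0.0], args=(y, K))
psi_hat, p_hat = 1 / (1 + np.exp(-fit.x))
print(round(psi_hat, 3), round(p_hat, 3))
```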

14.
Understanding patterns in the frequency of extreme natural events, such as earthquakes, is important because it helps in predicting their future occurrence and hence provides better civil protection. Distributions describing these events are known to be heavy-tailed and positively skewed, making standard distributions unsuitable for modelling the frequency of such events. The Birnbaum–Saunders distribution and its extreme-value version have been widely studied and applied due to their attractive properties. We derive L-moment equations for these distributions and propose novel methods for parameter estimation, goodness-of-fit assessment, and model selection. A simulation study is conducted to evaluate the performance of the L-moment estimators, which is compared to that of the maximum likelihood estimators, demonstrating the superiority of the proposed methods. To illustrate these methods in a practical application, a data analysis of real-world earthquake magnitudes, obtained from the global centroid moment tensor catalogue for 1962–2015, is carried out. This application identifies the extreme-value Birnbaum–Saunders distribution as a better model than classical extreme-value distributions for describing seismic events.
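A sketch of the sample L-moments that drive such estimators, via Hosking's unbiased probability-weighted moments; the toy magnitudes are hypothetical, and the matching step (equating these to the theoretical L-moments of the Birnbaum-Saunders model) is not shown.

```python
import numpy as np

def sample_lmoments(x):
    """First four sample L-moments from the probability-weighted moments
    b0..b3 (Hosking's unbiased estimators)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) /
                ((n - 1) * (n - 2) * (n - 3)) * x) / n
    return (b0, 2*b1 - b0, 6*b2 - 6*b1 + b0, 20*b3 - 30*b2 + 12*b1 - b0)

# Hypothetical earthquake magnitudes; L-moment estimation proceeds by
# matching these sample values to their theoretical counterparts.
l1, l2, l3, l4 = sample_lmoments([5.1, 5.4, 5.2, 6.0, 5.3, 5.8, 5.5])
print(l1, l2, l3, l4)
```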

15.
A model for media exposure probabilities is developed in which the joint probability of exposure is proportional to the product of the marginal probabilities. The model is a generalization of Goodhardt & Ehrenberg's 'duplication of viewing law', with the duplication constant computed from a truncated canonical expansion of the joint exposure probability. The proposed model is compared, on the basis of estimation accuracy and computation speed, with an accurate and fast 'approximate' log-linear model (as noted previously) and the popular Metheringham beta-binomial model. Our model is shown to be more accurate than the approximate log-linear model and four times faster. In addition, it is much more accurate than Metheringham's model.
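The core of the model in one line, with hypothetical exposure probabilities: the duplication-of-viewing law makes the joint exposure probability proportional to the product of the marginals. The paper computes the constant D from a truncated canonical expansion; here it is simply assumed.

```python
def joint_exposure(p_a, p_b, duplication=1.2):
    """Duplication-of-viewing law: P(A and B) = D * P(A) * P(B); D > 1
    means audiences overlap more than independence would imply."""
    return duplication * p_a * p_b

# Hypothetical magazine exposure probabilities.
print(joint_exposure(0.30, 0.25))   # 0.09, vs 0.075 under independence
```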

16.
In many research fields, scientific questions are investigated by analyzing data collected over space and time, usually at fixed spatial locations and time steps, resulting in geo-referenced time series. In this context, it is of interest to identify potential partitions of the space and study their evolution over time. A finite space-time mixture model is proposed to identify level-based clusters in spatio-temporal data and study their evolution along the time frame. We account for space-time dependence by introducing spatio-temporally varying mixing weights that allocate observations at nearby locations and consecutive time points with similar cluster-membership probabilities. As a result, a clustering that varies over time and space is obtained. Conditionally on cluster membership, a state-space model is deployed to describe the temporal evolution of the sites belonging to each group. Full posterior inference is provided under a Bayesian framework through Markov chain Monte Carlo algorithms. A strategy to select a suitable number of clusters based on the posterior temporal patterns of the clusters is also offered. We evaluate our approach through simulation experiments and illustrate it using air quality data collected across Europe from 2001 to 2012, showing the benefit of borrowing strength across space and time.

17.
This article considers a time series model with a deterministic trend in which multiple structural changes are explicitly taken into account, while the number and locations of the change-points are unknown. We aim to identify the best model, with the appropriate number of change-points and adequate segment lengths between them. We derive a posterior probability and then apply a genetic algorithm (GA) to evaluate the posterior probabilities and locate the change-points. The GA provides a powerful, flexible tool for searching over possible change-point configurations. Numerical results obtained from simulation experiments show excellent empirical properties. To validate our model retrospectively, we estimate structural change-points using US and South Korean GDP data.
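A toy GA sketch, assuming a BIC-based fitness in place of the paper's posterior probability: chromosome entries mark candidate change-point locations, and all tuning constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(chrom, y):
    """Negative BIC of a piecewise-constant fit, with change-points at the
    1-entries of chrom (a simple stand-in for the posterior probability
    derived in the paper)."""
    cps = np.flatnonzero(chrom)
    segs = np.split(y, cps)
    if min(len(s) for s in segs) < 2:      # enforce a minimum segment length
        return -np.inf
    sse = sum(np.sum((s - s.mean()) ** 2) for s in segs)
    n, k = len(y), len(cps) + 1
    return -(n * np.log(sse / n) + 2 * k * np.log(n))

# Series with one change in mean (hypothetical data).
y = np.concatenate([rng.normal(0, 1, 60), rng.normal(3, 1, 40)])
pop = rng.binomial(1, 0.03, size=(50, len(y)))         # initial population

for _ in range(100):
    fit = np.array([fitness(c, y) for c in pop])
    parents = pop[np.argsort(fit)[-25:]]                # selection
    children = parents[rng.integers(0, 25, size=25)].copy()
    children[rng.random(children.shape) < 0.01] ^= 1    # mutation
    pop = np.vstack([parents, children])

best = max(pop, key=lambda c: fitness(c, y))
print("estimated change-points:", np.flatnonzero(best))
```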

18.
We consider graphs, confidence procedures, and tests that can be used to compare transition probabilities in a Markov chain model with intensities specified by a Cox proportional hazards model. Under the assumptions of this model, the regression coefficients provide information about the relative risks of covariates in one-step transitions; however, they cannot in general be used to assess whether the covariates have a beneficial or detrimental effect on the endpoint events. To alleviate this problem, we consider graphical tests based on confidence procedures for a generalized Q-Q plot and for the difference between transition probabilities. The procedures are illustrated using data from the International Bone Marrow Transplant Registry.

19.
This paper is concerned with developing procedures for constructing confidence intervals that hold approximately equal tail probabilities and coverage probabilities close to the nominal level for the scale parameter θ of the two-parameter exponential lifetime model when the data are time censored. We use a conditional approach to eliminate the nuisance parameter and develop several procedures based on the conditional likelihood. The methods are (a) a method based on the likelihood ratio, (b) a method based on the skewness-corrected score (Bartlett, Biometrika 40 (1953), 12–19), (c) a method based on an adjustment to the signed root likelihood ratio (DiCiccio, Field et al., Biometrika 77 (1990), 77–95), and (d) a method based on a parameter transformation to the normal approximation. The performance of these procedures is then compared, through simulations, with the usual likelihood-based procedure. The skewness-corrected score procedure performs best in terms of holding both equal tail probabilities and nominal coverage probabilities, even for small samples.
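A sketch of procedure (a), the likelihood-ratio interval, for the one-parameter exponential under Type I (time) censoring at a hypothetical cutoff; the conditional-likelihood elimination of the threshold parameter and the skewness corrections are not reproduced.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

# Hypothetical lifetimes censored at 60: values equal to 60 are censored.
t = np.array([12., 30., 45., 60., 60., 60., 22., 60.])
d = (t < 60).astype(int)          # failure indicator
r, T = d.sum(), t.sum()
theta_hat = T / r                 # MLE of the exponential scale

def loglik(theta):
    """Censored-exponential log-likelihood: -r*log(theta) - T/theta."""
    return -r * np.log(theta) - T / theta

def lr_root(theta):
    return 2 * (loglik(theta_hat) - loglik(theta)) - chi2.ppf(0.95, 1)

lo = brentq(lr_root, 1e-6, theta_hat)
hi = brentq(lr_root, theta_hat, 1e6)
print(f"theta_hat = {theta_hat:.1f}, 95% LR interval = ({lo:.1f}, {hi:.1f})")
```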

20.
In this paper, we report some results on the exact significance level when the usual F-statistic is used in a linear regression model with autocorrelated disturbances. The exact tail-area probabilities sometimes differ substantially from the nominal size used in an 'F-test' and from the upper-bound probabilities derived by Kiviet (1979), which do not depend on the values of the regressors. A similar conclusion is reached for the exact size of the significance tests for the spurious regressions considered by Granger and Newbold (1974, 1977). The results indicate once more that one has to be careful when using an algebraic F-test in the presence of autoregressive errors. In such cases, however, the Durbin-Watson test can be expected to indicate the presence of autocorrelation.
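The paper computes exact tail areas; the sketch below instead checks the point by Monte Carlo, with a hypothetical trending regressor and AR(1) errors. The empirical size of the nominal 5% F-test is typically far above 0.05.

```python
import numpy as np
from scipy.stats import f as f_dist

rng = np.random.default_rng(0)
n, reps, rho, alpha = 50, 2000, 0.8, 0.05
x = np.arange(n) / n                      # trending regressor
X = np.column_stack([np.ones(n), x])
fcrit = f_dist.ppf(1 - alpha, 1, n - 2)
rejections = 0

for _ in range(reps):
    e = np.zeros(n)                       # AR(1) disturbances
    e[0] = rng.normal() / np.sqrt(1 - rho ** 2)
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal()
    y = 1.0 + e                           # true slope is zero
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    tss = np.sum((y - y.mean()) ** 2)
    F = (tss - rss) / (rss / (n - 2))     # F-test of the slope
    rejections += F > fcrit

print("empirical size:", rejections / reps)   # nominal size is 0.05
```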
