Similar Articles
20 similar articles found (search time: 14 ms)
1.
The incorporation of prior information about θ, where θ is the success probability in a binomial sampling model, is an essential feature of Bayesian statistics. Methodology based on information-theoretic concepts is introduced which (a) quantifies the amount of information provided by the sample data relative to that provided by the prior distribution and (b) allows for a ranking of prior distributions with respect to conservativeness, where conservatism refers to restraint of extraneous information about θ which is embedded in any prior distribution. In effect, the most conservative prior distribution from a specified class (each member of which carries the available prior information about θ) is that prior distribution within the class over which the likelihood function has the greatest average domination. The most conservative prior distributions from five different families of prior distributions over the interval (0,1), including the beta distribution, are determined and compared for three situations: (1) no prior estimate of θ is available, (2) a prior point estimate of θ is available, and (3) a prior interval estimate of θ is available. The results of the comparisons not only advocate the use of the beta prior distribution in binomial sampling but also indicate which particular one to use in the three aforementioned situations.
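The conjugate updating that makes the beta prior convenient in binomial sampling can be sketched in a few lines (function names are hypothetical; the paper's information-theoretic ranking of priors is not implemented here):

```python
def beta_binomial_update(alpha, beta, successes, failures):
    # Conjugacy: a Beta(alpha, beta) prior combined with binomial data
    # yields a Beta(alpha + successes, beta + failures) posterior.
    return alpha + successes, beta + failures

def beta_mean(alpha, beta):
    # Mean of a Beta(alpha, beta) distribution
    return alpha / (alpha + beta)

# Uniform Beta(1, 1) prior (no prior estimate of theta), 7 successes in 10 trials
a_post, b_post = beta_binomial_update(1, 1, 7, 3)
print(a_post, b_post, round(beta_mean(a_post, b_post), 4))  # 8 4 0.6667
```

How concentrated the chosen Beta(α, β) prior is relative to the likelihood is exactly what a conservativeness ranking of the kind described above would quantify.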

2.
The issues and dangers involved in testing multiple hypotheses are well recognised within the pharmaceutical industry. In reporting clinical trials, strenuous efforts are taken to avoid the inflation of type I error, with procedures such as the Bonferroni adjustment and its many elaborations and refinements being widely employed. Typically, such methods are conservative. They tend to be accurate if the multiple test statistics involved are mutually independent and achieve less than the type I error rate specified if these statistics are positively correlated. An alternative approach is to estimate the correlations between the test statistics and to perform a test that is conditional on those estimates being the true correlations. In this paper, we begin by assuming that test statistics are normally distributed and that their correlations are known. Under these circumstances, we explore several approaches to multiple testing, adapt them so that type I error is preserved exactly and then compare their powers over a range of true parameter values. For simplicity, the explorations are confined to the bivariate case. Having described the relative strengths and weaknesses of the approaches under study, we use simulation to assess the accuracy of the approximate theory developed when the correlations are estimated from the study data rather than being known in advance and when data are binary so that test statistics are only approximately normally distributed.
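The plain Bonferroni adjustment mentioned above can be sketched as follows (the helper name is hypothetical; this is the unadjusted classical procedure, not the correlation-conditional tests studied in the paper):

```python
def bonferroni_reject(p_values, alpha=0.05):
    # Reject H_i whenever p_i <= alpha / m. This bounds the familywise
    # type I error at alpha under any dependence structure, and is
    # conservative when the test statistics are positively correlated.
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

print(bonferroni_reject([0.01, 0.04, 0.20]))  # [True, False, False]
```

With three tests the per-test threshold drops to 0.05/3 ≈ 0.0167, which is why only the first hypothesis is rejected in the example.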

3.
The Multiple Comparison Procedures with Modeling Techniques (MCP-Mod) framework has recently been deemed fit-for-purpose for phase II studies by the U.S. Food and Drug Administration and the European Medicines Agency. Nonetheless, this approach relies on the asymptotic properties of Maximum Likelihood (ML) estimators, which might not be reasonable for small sample sizes. In this paper, we derive improved ML estimators and corrections for their covariance matrices in the censored Weibull regression model based on the corrective and preventive approaches. We performed two simulation studies to evaluate ML and improved ML estimators with their covariance matrices in (i) a regression framework and (ii) the MCP-Mod framework. We show that improved ML estimators are less biased than ML estimators, yielding Wald-type statistics that control the type I error without loss of power in both frameworks. We therefore recommend the use of improved ML estimators in the MCP-Mod approach to control the type I error at its nominal value for sample sizes ranging from 5 to 25 subjects per dose.

4.
This paper discusses the analysis of interval-censored failure time data, which has recently attracted a great amount of attention (Li and Pu, Lifetime Data Anal 9:57–70, 2003; Sun, The statistical analysis of interval-censored data, 2006; Tian and Cai, Biometrika 93(2):329–342, 2006; Zhang et al., Can J Stat 33:61–70, 2005). Interval-censored data mean that the survival time of interest is observed only to belong to an interval, and they occur in many fields including clinical trials, demographical studies, medical follow-up studies, public health studies and tumorgenicity experiments. A major difficulty with the analysis of interval-censored data is that one has to deal with a censoring mechanism that involves two related variables. For the inference, we present a transformation approach that transforms general interval-censored data into current status data, for which one only needs to deal with one censoring variable and the inference is thus much easier. We apply this general idea to regression analysis of interval-censored data using the additive hazards model, and numerical studies indicate that the method performs well in practical situations. An illustrative example is provided.

5.
The trimmed mean is a method of dealing with patient dropout in clinical trials that considers early discontinuation of treatment a bad outcome rather than a source of missing data. The present investigation is the first comprehensive assessment of the approach across a broad set of simulated clinical trial scenarios. In the trimmed mean approach, all patients who discontinue treatment prior to the primary endpoint are excluded from analysis by trimming an equal percentage of bad outcomes from each treatment arm. The untrimmed values are used to calculate means or mean changes. An explicit intent of trimming is to favor the group with lower dropout, because having more completers is a beneficial effect of the drug, or conversely, higher dropout is a bad effect. In the simulation study, the difference between treatments estimated from trimmed means was greater than the corresponding effect estimated from untrimmed means when dropout favored the experimental group, and vice versa. The trimmed mean estimates a unique estimand. Therefore, comparisons with other methods are difficult to interpret, and the utility of the trimmed mean hinges on the reasonableness of its assumptions: dropout is an equally bad outcome in all patients, and adherence decisions in the trial are sufficiently similar to clinical practice for the results to generalize. Trimming might be applicable to other intercurrent events such as switching to or adding rescue medicine. Given the well-known biases in some methods that estimate effectiveness, such as baseline observation carried forward and non-responder imputation, the trimmed mean may be a useful alternative when its assumptions are justifiable.
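The per-arm trimming described above can be sketched roughly as follows (function and variable names are hypothetical, and "lower value = worse outcome" is an assumed orientation; real implementations must also handle ties and fractional trim counts):

```python
def arm_trimmed_mean(completer_values, n_dropout, trim_frac):
    # Dropouts are scored as the worst outcomes and trimmed first; if the
    # common trim fraction exceeds this arm's dropout fraction, the lowest
    # observed values are trimmed next (assumes lower = worse).
    n_total = len(completer_values) + n_dropout
    n_trim = round(trim_frac * n_total)
    extra = max(0, n_trim - n_dropout)
    kept = sorted(completer_values)[extra:]
    return sum(kept) / len(kept)

# Experimental arm: 8 completers, 2 dropouts; control arm: 7 completers, 3 dropouts
exp_vals = [5, 6, 7, 8, 9, 10, 11, 12]
ctl_vals = [4, 5, 6, 7, 8, 9, 10]
trim_frac = max(2 / 10, 3 / 10)   # same percentage trimmed from each arm
effect = arm_trimmed_mean(exp_vals, 2, trim_frac) - arm_trimmed_mean(ctl_vals, 3, trim_frac)
print(effect)  # 2.0
```

Because the control arm had more dropouts, trimming removes observed (not just missing) values from the experimental arm, which is how the procedure deliberately rewards the arm with better adherence.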

6.
The N-mixture model proposed by Royle in 2004 may be used to approximate the abundance and detection probability of animal species in a given region. In 2006, Royle and Dorazio discussed the advantages of using a Bayesian approach in modelling animal abundance and occurrence using a hierarchical N-mixture model. N-mixture models assume replication on sampling sites, an assumption that may be violated when the site is not closed to changes in abundance during the survey period or when nominal replicates are defined spatially. In this paper, we studied the robustness of a Bayesian approach to fitting the N-mixture model for pseudo-replicated count data. Our simulation results showed that the Bayesian estimates for abundance and detection probability are slightly biased when the actual detection probability is small and are sensitive to the presence of extra variability within local sites.
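The data structure the N-mixture model assumes can be simulated directly (a sketch with hypothetical function names; no Bayesian fitting is attempted here):

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's multiplication algorithm for a Poisson(lam) draw
    L = math.exp(-lam)
    k, prod = 0, rng.random()
    while prod > L:
        k += 1
        prod *= rng.random()
    return k

def simulate_nmixture(n_sites, n_visits, lam, p, seed=0):
    # Latent abundance N_i ~ Poisson(lam) at each site; each of n_visits
    # replicate counts is Binomial(N_i, p). Holding N_i fixed across
    # visits encodes the closure assumption the paper probes.
    rng = random.Random(seed)
    counts = []
    for _ in range(n_sites):
        n_true = poisson_sample(rng, lam)
        counts.append([sum(rng.random() < p for _ in range(n_true))
                       for _ in range(n_visits)])
    return counts

data = simulate_nmixture(n_sites=50, n_visits=3, lam=4.0, p=0.5)
print(len(data), len(data[0]))  # 50 3
```

Pseudo-replication of the kind studied in the paper corresponds to redrawing N_i between visits, breaking the closure encoded above.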

7.
The model of independent competing risks provides no information for the assessment of competing failure modes if the failure mechanisms underlying these modes are coupled. Models for dependent competing risks in the literature can be distinguished on the basis of the functional behaviour of the conditional probability of failure due to a particular failure mode given that the failure time exceeds a fixed time, as a function of time. There is an interesting link between monotonicity of such conditional probability and dependence between failure time and failure mode, via crude hazard rates. In this paper, we propose tests for testing the dependence between failure time and failure mode using the crude hazards and using the conditional probabilities mentioned above. We establish the equivalence between the two approaches and provide an asymptotically efficient weight function under a sequence of local alternatives. The tests are applied to simulated data and to mortality follow-up data.

8.
Time series are often affected by interventions such as strikes, earthquakes, or policy changes. In the current paper, we build a practical nonparametric intervention model using the central mean subspace in time series. We estimate the central mean subspace for time series, taking into account known interventions, by using the Nadaraya–Watson kernel estimator. We use the modified Bayesian information criterion to estimate the unknown lag and dimension. Finally, we demonstrate that this nonparametric approach for intervened time series performs well in simulations and in a real data analysis of monthly average oxidant levels.

9.

The linear mixed-effects model (Verbeke and Molenberghs, 2000) has become a standard tool for the analysis of continuous hierarchical data such as, for example, repeated measures or data from meta-analyses. However, in certain situations the model does pose insurmountable computational problems. Precisely this has been the experience of Buyse et al. (2000a) who proposed an estimation- and prediction-based approach for evaluating surrogate endpoints. Their approach requires fitting linear mixed models to data from several clinical trials. In doing so, these authors built on the earlier, single-trial based, work by Prentice (1989), Freedman et al. (1992), and Buyse and Molenberghs (1998). While Buyse et al. (2000a) claim their approach has a number of advantages over the classical single-trial methods, a solution needs to be found for the computational complexity of the corresponding linear mixed model. In this paper, we propose and study a number of possible simplifications. This is done by means of a simulation study and by applying the various strategies to data from three clinical studies: Pharmacological Therapy for Macular Degeneration Study Group (1977), Ovarian Cancer Meta-analysis Project (1991) and Corfu-A Study Group (1995).

10.
In this paper, we develop a simple nonparametric test for testing the independence of time to failure and cause of failure in the competing risks set-up. We generalise the test to the situation where the failure data are right censored. We obtain the asymptotic distribution of the test statistics for complete and censored data. The efficiency loss due to censoring is studied using Pitman efficiency. The performance of the proposed test is evaluated through simulations. Finally, we illustrate our test procedure using three real data sets.

11.
Two common features of clinical trials, and other longitudinal studies, are (1) a primary interest in composite endpoints, and (2) the problem of subjects withdrawing prematurely from the study. In some settings, withdrawal may only affect observation of some components of the composite endpoint, for example when another component is death, information on which may be available from a national registry. In this paper, we use the theory of augmented inverse probability weighted estimating equations to show how such partial information on the composite endpoint for subjects who withdraw from the study can be incorporated in a principled way into the estimation of the distribution of time to composite endpoint, typically leading to increased efficiency without relying on additional assumptions above those that would be made by standard approaches. We describe our proposed approach theoretically, and demonstrate its properties in a simulation study.

12.
We introduce two types of ordinal pattern dependence between time series. Positive (resp. negative) ordinal pattern dependence can be seen as a non-parametric and in particular non-linear counterpart to positive (resp. negative) correlation. We show in an explorative study that both types of this dependence show up in real-world financial data.

13.
Time series with more than one time-dependent variable require building an appropriate model in which the variables not only have relationships with each other but also depend on previous values in time. Based on developments in sufficient dimension reduction, we investigate a new class of multiple time series models without parametric assumptions. First, for the dependent and independent time series, we use a univariate time series central subspace to estimate the autoregressive lags of the series. Second, we extract the successive directions to estimate the time series central subspace for regressors, which include past lags of the dependent and independent series, in a mutual-information multiple-index time series. Lastly, we estimate a multiple time series model for the reduced directions. In this article, we propose a unified estimation method of minimal dimension using an Akaike information criterion for situations in which the dimension for multiple regressors is unknown. We present an analysis using real data from a housing price index showing that our approach is an alternative for multiple time series modeling. In addition, we check the accuracy of the multiple time series central subspace method using three simulated data sets.

14.
A. Wong, Statistical Papers, 1995, 36(1): 253–264
The problem of predicting a future observation based on an observed sample is discussed. As a device for eliminating the parameter from the conditional distribution of a future observation given the observed sample, we suggest averaging with respect to an exact or approximate confidence distribution function. It is shown that in many standard problems where an exact answer is available by other methods, the averaging method reproduces that exact answer. When the exact answer is not easily available, the averaging method gives a simple and systematic approach to the problems. Applications to life data from exponential distribution and regression problems are examined.
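For exponential life data the averaging device admits a closed form that a short Monte Carlo check recovers (a sketch under the assumption that the confidence distribution of the rate λ, given a sample of size n with total S, is Gamma(n, rate = S); the function name is hypothetical):

```python
import math
import random

def predictive_survival_exp(sample, y, n_draws=200_000, seed=1):
    # Average the exponential survival function exp(-lam * y) over the
    # Gamma(n, rate = S) confidence distribution of lam; the gamma
    # moment generating function gives the closed form (S / (S + y))**n.
    n, S = len(sample), sum(sample)
    rng = random.Random(seed)
    mc = sum(math.exp(-rng.gammavariate(n, 1 / S) * y)
             for _ in range(n_draws)) / n_draws
    return mc, (S / (S + y)) ** n

mc, exact = predictive_survival_exp([1.2, 0.7, 2.1, 0.4, 1.6], y=1.0)
print(round(exact, 4))  # 0.4627
```

The agreement between the Monte Carlo average and the closed form illustrates the claim that the averaging method reproduces exact answers where they exist.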

15.
This paper deals with the construction of the life table. A discussion of basic facts about the life table is followed by the proposal of a nonstationary, autoregressive model for the life table. The moment structure of the nonstationary, autoregressive model is developed. Some estimation procedures are proposed followed by several examples.

16.
In many disease areas, commonly used long-term clinical endpoints are becoming increasingly difficult to implement due to long follow-up times and/or increased costs. Shorter-term surrogate endpoints are urgently needed to expedite drug development, the evaluation of which requires robust and reliable statistical methodology to drive meaningful clinical conclusions about the strength of relationship with the true long-term endpoint. This paper uses a simulation study to explore one such previously proposed method, based on information theory, for evaluation of time to event surrogate and long-term endpoints, including the first examination within a meta-analytic setting of multiple clinical trials with such endpoints. The performance of the information theory method is examined for various scenarios including different dependence structures, surrogate endpoints, censoring mechanisms, treatment effects, trial and sample sizes, and for surrogate and true endpoints with a natural time-ordering. Results allow us to conclude that, contrary to some findings in the literature, the approach provides estimates of surrogacy that may be substantially lower than the true relationship between surrogate and true endpoints, and rarely reach a level that would enable confidence in the strength of a given surrogate endpoint. As a result, care is needed in the assessment of time to event surrogate and true endpoints based only on this methodology.

17.
The use of quantitative variables for risk assessment suffers from the lack of a clear-cut definition of risk. The proposals of Chen and Gaylor (1992) and of Kodell and West (1993) showed a way out of this dilemma. Additional risk is defined as the increase in probability of being in an abnormal state for an exposed individual. In this paper we show how this approach can be generalized to situations where an additional source of variability, often called litter effect, is present. This occurs often in studies on teratogenicity. The coverage of confidence bounds on the additional risk is shown to be sufficient using a small simulation study.

18.
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which one observes the primary exposure variables with a probability that depends on the observed value of the outcome variable. When the outcome of interest is failure time, the observed data are often censored. By allowing the selection of the supplemental samples to depend on whether the event of interest happens, and by oversampling subjects from the most informative regions, an ODS design for time-to-event data can reduce the cost of a study and improve its efficiency. We review recent progress and advances in research on ODS designs with failure time data. This includes research on related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.

19.
For a number of reasons, surrogate endpoints are considered instead of the so-called true endpoint in clinical studies, especially when such endpoints can be measured earlier, and/or with less burden for patient and experimenter. Surrogate endpoints may occur more frequently than their standard counterparts. For these reasons, it is not surprising that the use of surrogate endpoints in clinical practice is increasing.

20.
Empirical Bayes methods are used in estimating the probability based on randomly right-censored samples. The estimator is shown to be asymptotically optimal. Thus, in a way, this work is similar to the results of Hollander and Korwar (1976), who used a similar approach to estimation in the case of non-censored data. We also give here a shorter proof of their rate result. In addition, a testing procedure is obtained to test the hypothesis against its alternative on the basis of censored data. It is shown that this procedure is asymptotically optimal with rate of convergence n. This result is analogous to our earlier result for the uncensored case (1970). The empirical Bayes procedure is illustrated by means of a practical example.
