Similar Documents (20 results)
1.
This paper is about the graphical depiction of the set of feasible gains-in-utilities accruing to three Bayesians involved in the joint estimation of a multivariate normal mean vector. The basic theory for this problem is sketched. Then a suitable parametrization of surface contours is developed. These contours allow the surface to be mapped and graphically displayed. This is done with the LIG language for interactive graphics. As the opinions of the three Bayesians diverge, the illustrations contained in the paper show how the initially smooth, balloon-shaped structure develops and ‘clicks’ through a flat spot and eventually becomes highly irregular with a central indentation. The result provides insight into the nature of dissensus where explicit mathematical analysis is extremely difficult.

2.
If a crossover design with more than two treatments is carryover balanced, then the usual randomization of experimental units and periods would destroy the neighbour structure of the design. As an alternative, Bailey [1985. Restricted randomization for neighbour-balanced designs. Statist. Decisions Suppl. 2, 237–248] considered randomization of experimental units and of treatment labels, which leaves the neighbour structure intact. She has shown that, if there are no carryover effects, this randomization validates the row–column model, provided the starting design is a generalized Latin square. We extend this result to generalized Youden designs where either the number of experimental units is a multiple of the number of treatments or the number of periods is equal to the number of treatments. For the situation when there are carryover effects we show for so-called totally balanced designs that the variance of the estimates of treatment differences does not change in the presence of carryover effects, while the estimated variance of this estimate becomes conservative.

3.
Multiplicities are ubiquitous. They threaten every inference in every aspect of life. Despite the focus in statistics on multiplicities, statisticians underestimate their importance. One reason is that the focus is on methodology for known multiplicities. Silent multiplicities are much more important and they are insidious. Both frequentists and Bayesians have important contributions to make regarding problems of multiplicities. But neither group has an inside track. Frequentists and Bayesians working together is a promising way of making inroads into this knotty set of problems. Two experiments with identical results may well lead to very different statistical conclusions. So we will never be able to use a software package with default settings to resolve all problems of multiplicities. Every problem has unique aspects. And all problems require understanding the substantive area of application.

4.

The problem of comparing several samples to decide whether the means and/or variances are significantly different is considered. It is shown that with very non-normal distributions even a very robust test to compare the means has poor properties when the distributions have different variances, and therefore a new testing scheme is proposed. This starts by using an exact randomization test for any significant difference (in means or variances) between the samples. If a non-significant result is obtained then testing stops. Otherwise, an approximate randomization test for mean differences (but allowing for variance differences) is carried out, together with a bootstrap procedure to assess whether this test is reliable. A randomization version of Levene's test is also carried out for differences in variation between samples. The five possible conclusions are then that (i) there is no evidence of any differences, (ii) evidence for mean differences only, (iii) evidence for variance differences only, (iv) evidence for mean and variance differences, or (v) evidence for some indeterminate differences. A simulation experiment to assess the properties of the proposed scheme is described. From this it is concluded that the scheme is useful as a robust, conservative method for comparing samples in cases where they may be from very non-normal distributions.
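The exact randomization test that opens the proposed scheme can be sketched as follows. This is a minimal stdlib illustration, not the paper's implementation: the function name is ours, and for brevity the statistic is the absolute mean difference, whereas the paper screens for any difference in means or variances.

```python
import itertools
from statistics import mean

def exact_randomization_test(x, y):
    """Exact two-sample randomization test: enumerate every split of
    the pooled observations into groups of the original sizes and
    report the proportion of splits whose absolute mean difference is
    at least as large as the observed one. Feasible only for the
    small samples the abstract has in mind."""
    pooled = x + y
    n = len(x)
    observed = abs(mean(x) - mean(y))
    extreme = total = 0
    for idx in itertools.combinations(range(len(pooled)), n):
        chosen = set(idx)
        g1 = [pooled[i] for i in chosen]
        g2 = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        total += 1
        if abs(mean(g1) - mean(g2)) >= observed - 1e-12:
            extreme += 1
    return extreme / total

# Well-separated samples: only the observed split and its mirror are
# as extreme, so p = 2/20 = 0.1
p = exact_randomization_test([1.0, 2.0, 3.0], [8.0, 9.0, 10.0])
```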

5.
Treatment during cancer clinical trials sometimes involves the combination of multiple drugs. In addition, in recent years there has been a trend toward phase I/II trials, in which a phase I and a phase II trial are combined into a single trial to accelerate drug development. Methods for the seamless combination of the phase I and II parts are currently under investigation. In the phase II part, adaptive randomization on the basis of patient efficacy outcomes allocates more patients to the dose combinations considered to have higher efficacy. Patient toxicity outcomes are used for determining admissibility to each dose combination and are not used for selection of the dose combination itself. When the objective is to find the optimal dose combination with respect to both toxicity and efficacy, rather than efficacy alone, patients need to be allocated to dose combinations with consideration of the trade-off between toxicity and efficacy. We propose a Bayesian hierarchical model and an adaptive randomization scheme that account for the relationship between toxicity and efficacy. Using the toxicity and efficacy outcomes of patients, the Bayesian hierarchical model is used to estimate the toxicity probability and efficacy probability of each dose combination. We then use Bayesian moving-reference adaptive randomization on the basis of a desirability measure computed from the resulting estimates. Computer simulations suggest that the proposed method recommends a higher percentage of target dose combinations than a previously proposed method.

6.
ABSTRACT

Clinical trials are usually designed with the implicit assumption that data analysis will occur only after the trial is completed. Evaluating drug efficacy in the middle of the study without breaking the randomization codes is therefore a challenging problem for the sponsor. In this article, the randomized response model and the mixture model are introduced to analyze crossover-design data while masking the randomization codes. Given the probability of each treatment sequence, the test based on the mixture model provides higher power than the test based on the randomized response model, which is inadequate in the example. The paired t-test has higher power than both models if the randomization codes are broken. The sponsor may stop the trial early to claim the effectiveness of the study drug if the mixture model yields a positive result.

7.
This paper deals with the analysis of randomization effects in multi‐centre clinical trials. The two randomization schemes most often used in clinical trials are considered: unstratified and centre‐stratified block‐permuted randomization. The prediction of the number of patients randomized to different treatment arms in different regions during the recruitment period accounting for the stochastic nature of the recruitment and effects of multiple centres is investigated. A new analytic approach using a Poisson‐gamma patient recruitment model (patients arrive at different centres according to Poisson processes with rates sampled from a gamma distributed population) and its further extensions is proposed. Closed‐form expressions for corresponding distributions of the predicted number of the patients randomized in different regions are derived. In the case of two treatments, the properties of the total imbalance in the number of patients on treatment arms caused by using centre‐stratified randomization are investigated and for a large number of centres a normal approximation of imbalance is proved. The impact of imbalance on the power of the study is considered. It is shown that the loss of statistical power is practically negligible and can be compensated by a minor increase in sample size. The influence of patient dropout is also investigated. The impact of randomization on predicted drug supply overage is discussed. Copyright © 2010 John Wiley & Sons, Ltd.
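The mixture underlying the Poisson-gamma recruitment model admits a well-known closed form: a Poisson count whose rate is gamma distributed is negative binomial. The sketch below illustrates that fact only; the function and parameter names are ours, and the paper's actual closed-form expressions cover regions, multiple arms, and dropout.

```python
from math import exp, lgamma, log

def recruit_pmf(k, alpha, beta, t):
    """P(a centre randomizes exactly k patients in time t) when its
    Poisson recruitment rate is drawn from Gamma(alpha, rate=beta):
    mixing a Poisson over a gamma rate gives a negative binomial with
    success probability beta / (beta + t)."""
    return exp(lgamma(alpha + k) - lgamma(alpha) - lgamma(k + 1)
               + alpha * (log(beta) - log(beta + t))
               + k * (log(t) - log(beta + t)))

# Sanity check on the moments: the mixture mean is alpha * t / beta
# (here 2.0 * 3.0 / 1.0 = 6.0); the tail beyond k = 400 is negligible.
mean_n = sum(k * recruit_pmf(k, 2.0, 1.0, 3.0) for k in range(400))
```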

8.
For binary experimental data, we discuss randomization‐based inferential procedures that do not need to invoke any modeling assumptions. In addition to the classical method of moments, we also introduce model‐free likelihood and Bayesian methods based solely on the physical randomization without any hypothetical super population assumptions about the potential outcomes. These estimators have some properties superior to moment‐based ones such as only giving estimates in regions of feasible support. Due to the lack of identification of the causal model, we also propose a sensitivity analysis approach that allows for the characterization of the impact of the association between the potential outcomes on statistical inference.

9.
While randomization inference is well developed for continuous and binary outcomes, there has been comparatively little work for outcomes with nonnegative support and clumping at zero. Typically, outcomes of this type have been modeled using parametric models that impose strong distributional assumptions. This article proposes new randomization inference procedures for nonnegative outcomes with clumping at zero. Instead of making distributional assumptions, we propose various assumptions about the nature of the response to treatment and use permutation inference for both testing and estimation. This approach allows for some natural goodness-of-fit tests for model assessment, as well as flexibility in selecting test statistics sensitive to different potential alternatives. We illustrate our approach using two randomized trials, where job training interventions were designed to increase earnings of participants.

10.
Identifiability has long been an important concept in classical statistical estimation. Historically, Bayesians have been less interested in the concept since, strictly speaking, any parameter having a proper prior distribution also has a proper posterior, and is thus estimable. However, the larger statistical community's recent move toward more Bayesian thinking is largely fueled by an interest in Markov chain Monte Carlo-based analyses using vague or even improper priors. As such, Bayesians have been forced to think more carefully about what has been learned about the parameters of interest (given the data so far), or what could possibly be learned (given an infinite amount of data). In this paper, we propose measures of Bayesian learning based on differences in precision and Kullback–Leibler divergence. After investigating them in the context of some familiar Gaussian linear hierarchical models, we consider their use in a more challenging setting involving two sets of random effects (traditional and spatially arranged), only the sum of which is identified by the data. We illustrate this latter model with an example from periodontal data analysis, where the spatial aspect arises from the proximity of various measurements taken in the mouth. Our results suggest our measures behave sensibly and may be useful in even more complicated (e.g., non-Gaussian) model settings.

11.
ABSTRACT

In this article we evaluate the performance of a randomization test for a subset of regression coefficients in a linear model. This randomization test is based on random permutations of the independent variables. It is shown that the method maintains its level of significance, except for extreme situations, and has power that approximates the power of another randomization test, which is based on the permutation of residuals from the reduced model. We also show, via an example, that the method of permuting independent variables is more valuable than other randomization methods because it can be used in connection with the downweighting of outliers.
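The permutation-of-independent-variables idea can be sketched in its simplest form, a single predictor; this is our illustration, not the article's procedure, which handles a subset of coefficients in a model with further covariates.

```python
import random
from statistics import mean

def slope(x, y):
    """Least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def permute_x_test(x, y, n_perm=999, seed=0):
    """Monte Carlo randomization test obtained by randomly permuting
    the independent variable and recomputing the slope each time.
    Shown for one predictor; for a subset of coefficients the
    variables of interest are permuted while the others stay fixed."""
    rng = random.Random(seed)
    obs = abs(slope(x, y))
    hits = 1  # the observed arrangement counts as one permutation
    for _ in range(n_perm):
        xp = x[:]
        rng.shuffle(xp)
        if abs(slope(xp, y)) >= obs:
            hits += 1
    return hits / (n_perm + 1)

# A perfect linear relationship should yield a very small p-value
p_val = permute_x_test([float(i) for i in range(1, 11)],
                       [2.0 * i for i in range(1, 11)])
```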

12.
Many experiments target populations in which persons are nested within clusters. Randomization to treatment conditions can be done at the cluster level or at the person level within each cluster. The latter may result in control group contamination, and cluster randomization is therefore often preferred in practice. This article models the control group contamination, calculates the required sample sizes for both levels of randomization, and gives the degree of contamination for which cluster randomization is preferable to randomization of persons within clusters. Moreover, it provides examples of situations where one has to make a choice between the two levels of randomization.
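The sample-size cost of cluster randomization is conventionally expressed through the design effect. The sketch below shows only that textbook correction, with names of our choosing; the article's calculation goes further by trading this inflation off against the contamination incurred under person-level randomization.

```python
def cluster_randomized_n(n_individual, cluster_size, icc):
    """Inflate a sample size computed for individual randomization by
    the standard design effect 1 + (m - 1) * rho, where m is the
    cluster size and rho the intraclass correlation. The textbook
    correction only, not the article's contamination model."""
    return n_individual * (1 + (cluster_size - 1) * icc)

# 100 persons per arm, clusters of 10, ICC = 0.05 -> 145 persons
n_needed = cluster_randomized_n(100, 10, 0.05)
```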

13.
The re‐randomization test has been considered a robust alternative to traditional population model‐based methods for analyzing randomized clinical trials. This is especially so when the clinical trials are randomized according to minimization, a popular covariate‐adaptive randomization method for ensuring balance among prognostic factors. Among various re‐randomization tests, the fixed‐entry‐order re‐randomization test is advocated as an effective strategy when a temporal trend is suspected. Yet when minimization is applied to trials with unequal allocation, the fixed‐entry‐order re‐randomization test is biased and its power thus compromised. We find that the bias is due to non‐uniform re‐allocation probabilities incurred by the re‐randomization in this case. We therefore propose a weighted fixed‐entry‐order re‐randomization test to overcome the bias. The performance of the new test was investigated in simulation studies that mimic the settings of a real clinical trial. The weighted re‐randomization test was found to work well in the scenarios investigated, including in the presence of a strong temporal trend. Copyright © 2013 John Wiley & Sons, Ltd.

14.
In a clinical trial to compare two treatments, subjects may be allocated sequentially to treatment groups by a restricted randomization rule. Suppose that at the end of the trial, the investigator is interested in a post-stratified or subgroup analysis with respect to a particular demographic or clinical factor which was not selected prior to the trial for stratified randomization. Under a randomization model, large sample theory of two-sample post-stratified permutational tests is developed with a broad class of restricted randomization treatment allocation rules. The test procedures proposed here are illustrated with a real-life example. The results of this example indicate that it is not always possible to ignore the treatment rule used in the trial in the design-based analysis.

15.
It is suggested that, whenever possible, an experiment be run in a completely randomized fashion. One reason for randomizing is to protect against violations of the usual linear model assumptions. The protection has always been argued on qualitative grounds. This paper quantitatively demonstrates the protection by hypothesizing models in violation of the usual assumptions, mathematically representing the physical act of randomization, and algebraically deriving expected mean squares (EMS) and F tests. It is shown that randomization offers considerable but not complete protection against model violations.

The same methodology is also applied to blocked experiments, i.e. to experiments performed under a specific type of incomplete randomization commonly referred to as blocking. It is shown that blocking offers little protection against certain model violations. The common practice of representing blocks as a treatment factor applied to the experimental units approximates the form of the EMS derived under the violated-assumptions model.

16.
Summary. An advantage of randomization tests for small samples is that an exact P -value can be computed under an additive model. A disadvantage with very small sample sizes is that the resulting discrete distribution for P -values can make it mathematically impossible for a P -value to attain a particular degree of significance. We investigate a distribution of P -values that arises when several thousand randomization tests are conducted simultaneously using small samples, a situation that arises with microarray gene expression data. We show that the distribution yields valuable information regarding groups of genes that are differentially expressed between two groups: a treatment group and a control group. This distribution helps to categorize genes with varying degrees of overlap of genetic expression values between the two groups, and it helps to quantify the degree of overlap by using the P -value from a randomization test. Moreover, a statistical test is available that compares the actual distribution of P -values with an expected distribution if there are no genes that are differentially expressed. We demonstrate the method and illustrate the results by using a microarray data set involving a cell line for rheumatoid arthritis. A small simulation study evaluates the effect that correlated gene expression levels could have on results from the analysis.  
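The discreteness the abstract mentions is easy to quantify: an exact two-sample randomization test has only as many support points as there are splits of the pooled data, which bounds the attainable p-values from below. A small sketch (function name ours; the two-sided bound assumes a statistic symmetric in the two groups):

```python
from math import comb

def smallest_exact_pvalue(n, m, two_sided=True):
    """Smallest p-value an exact two-sample randomization test can
    attain with group sizes n and m: the permutation distribution has
    only comb(n + m, n) support points, so no smaller p-value exists.
    The two-sided bound assumes a symmetric statistic."""
    return (2 if two_sided else 1) / comb(n + m, n)

# With 3 observations per group the best attainable two-sided p-value
# is 2/20 = 0.1, so a 0.05 significance level is out of reach.
p_min = smallest_exact_pvalue(3, 3)
```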

17.
ABSTRACT

The randomized response technique is an effective survey method designed to elicit sensitive information while ensuring the privacy of the respondents. In this article, we present some new results on the randomized response model in situations wherein one or two response variables are assumed to follow a multinomial distribution. For a single sensitive question, we use the well-known Hopkins randomization device to derive estimates, both under the assumption of truthful and untruthful responses, and present a technique for making pairwise comparisons. When there are two sensitive questions of interest, we derive a Pearson product moment correlation estimator based on the multinomial model assumption. This estimator may be used to quantify the linear relationship between two variables when multinomial response data are observed according to a randomized-response protocol.
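The basic unbiasing step behind any randomized-response design is easiest to see in Warner's classical binary model, shown below as a simpler stand-in for the Hopkins device and the multinomial setting the article actually treats; the function name is ours.

```python
def warner_estimate(yes_responses, n, p):
    """Warner's classical randomized-response estimator. Each
    respondent answers the sensitive question with probability p and
    its negation with probability 1 - p, so the observable proportion
    satisfies P(yes) = p * pi + (1 - p) * (1 - pi); inverting this
    recovers the sensitive prevalence pi (requires p != 0.5)."""
    lam = yes_responses / n
    return (lam - (1 - p)) / (2 * p - 1)

# If the true prevalence is 0.3 and p = 0.7, the expected yes-rate is
# 0.7 * 0.3 + 0.3 * 0.7 = 0.42, and 42 yeses in 100 invert back to 0.3
pi_hat = warner_estimate(42, 100, 0.7)
```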

18.
Summary.  A controversial topic in obstetrics is the effect of walking on the probability of Caesarean section among women in labour. A major reason for the controversy is the presence of non-compliance that complicates the estimation of efficacy, the effect of treatment received on outcome. The intent-to-treat method does not estimate efficacy, and estimates of efficacy that are based directly on treatment received may be biased because they are not protected by randomization. However, when non-compliance occurs immediately after randomization, the use of a potential outcomes model with reasonable assumptions has made it possible to estimate efficacy and still to retain the benefits of randomization to avoid selection bias. In this obstetrics application, non-compliance occurs initially and later in one arm. Consequently some parameters cannot be uniquely estimated without making strong assumptions. This difficulty is circumvented by a new study design involving an additional randomization group and a novel potential outcomes model (principal stratification).

19.
Random assignment of experimental units to treatment and control groups is a conventional device to create unbiased comparisons. However, when sample sizes are small and the units differ considerably, there is a significant risk that randomization will create seriously unbalanced partitions of the units into treatment and control groups. We develop and evaluate an alternative to complete randomization for small-sample comparisons involving ordinal data with partial information on ranks of units. For instance, we might know that, of eight units, Rank(A) < Rank(C), Rank(A) < Rank(E) and Rank(D) < Rank(H). We develop an efficient computational procedure to use such information as the basis for restricted randomization of units to the treatment group. We compare our methods to complete randomization in the context of the Mann-Whitney test. With sufficient ranking information, the restricted randomization results in more powerful comparisons.

20.
Tests of significance are often made in situations where the standard assumptions underlying the probability calculations do not hold. As a result, the reported significance levels become difficult to interpret. This article sketches an alternative interpretation of a reported significance level, valid in considerable generality. This level locates the given data set within the spectrum of other data sets derived from the given one by an appropriate class of transformations. If the null hypothesis being tested holds, the derived data sets should be equivalent to the original one. Thus, a small reported significance level indicates an unusual data set. This development parallels that of randomization tests, but there is a crucial technical difference: our approach involves permuting observed residuals; the classical randomization approach involves permuting unobservable, or perhaps nonexistent, stochastic disturbance terms.
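The permuting-observed-residuals idea can be sketched for a two-sample location comparison. This is our minimal illustration of the general principle, with names and the mean-difference statistic of our choosing, not the article's full construction.

```python
import itertools
from statistics import mean

def residual_permutation_level(x, y):
    """Locate the observed mean difference within the spectrum of data
    sets obtained by reallocating the pooled *observed* residuals
    (deviations from each group's own mean) between the two groups.
    Contrast: classical randomization permutes unobservable
    disturbance terms; here only observed residuals are reshuffled."""
    resid = [v - mean(x) for v in x] + [v - mean(y) for v in y]
    observed = abs(mean(x) - mean(y))
    n, total, extreme = len(x), 0, 0
    for idx in itertools.combinations(range(len(resid)), n):
        chosen = set(idx)
        g1 = [resid[i] for i in chosen]
        g2 = [resid[i] for i in range(len(resid)) if i not in chosen]
        total += 1
        if abs(mean(g1) - mean(g2)) >= observed - 1e-12:
            extreme += 1
    return extreme / total

# Well-separated groups: no reallocation of the small within-group
# residuals can reproduce the large observed difference
level = residual_permutation_level([1.0, 2.0, 3.0], [8.0, 9.0, 10.0])
```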


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号