Similar Documents
20 similar documents found
1.
The problem of building bootstrap confidence intervals for small probabilities with count data is addressed. The law of the independent observations is assumed to be a mixture of a given family of power series distributions. The mixing distribution is estimated by nonparametric maximum likelihood and the corresponding mixture is used for resampling. We build percentile-t and Efron percentile bootstrap confidence intervals for the probabilities and we prove their consistency in probability. The new theoretical results are supported by simulation experiments for Poisson and geometric mixtures. We compare percentile-t and Efron percentile bootstrap intervals with eight other bootstrap or asymptotic-theory-based intervals. It appears that Efron percentile bootstrap intervals outperform the competitors in terms of coverage probability and length.
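
A minimal sketch of the Efron percentile interval in this setting, resampling the observed counts directly for brevity; the paper's consistency theory instead resamples from the Poisson or geometric mixture fitted by nonparametric maximum likelihood. The data and the function name percentile_ci are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def percentile_ci(x, k=0, B=2000, alpha=0.05):
    """Efron percentile bootstrap CI for p = P(X = k) from count data.

    Sketch only: resamples nonparametrically from the observed counts;
    the paper resamples from a mixture fitted by nonparametric maximum
    likelihood, which is what drives its consistency results.
    """
    n = len(x)
    p_hat = np.mean(x == k)
    boot = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        boot[b] = np.mean(xb == k)
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return p_hat, (lo, hi)

# Hypothetical data: a 50/50 Poisson mixture with means 0.5 and 3
x = np.where(rng.random(200) < 0.5, rng.poisson(0.5, 200), rng.poisson(3.0, 200))
print(percentile_ci(x, k=0))
```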

2.
3.
Single-case experiments are frequently used in research involving a clinical intervention, since large-n trials are often impractical in clinical settings. Nonparametric methods are valid tools for investigating a possible difference between the effects of the treatments considered in the study; in particular, permutation solutions work well for assessing differences in treatment effects. We present an extension of a permutation solution to the multivariate response case and to the case of replicated single-case experiments. A simulation study shows that the approach is both reliable under the null hypothesis and powerful under the alternative. Finally, we present the results of an application to two real experiments.
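
A minimal sketch of a univariate permutation test for a single single-case experiment, assuming treatment labels may be permuted freely; real single-case designs restrict the permutations to the rearrangements the randomization actually allowed, and the paper's extension combines such tests across response variables and replicates. All names and data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_test(y, labels, B=9999):
    """Two-treatment permutation test for a single-case experiment.

    Sketch: the statistic is the A-vs-B mean difference and labels are
    permuted freely; an alternating-treatments design would restrict the
    permutations to those the randomization allowed.
    """
    obs = y[labels == "A"].mean() - y[labels == "B"].mean()
    count = 0
    for _ in range(B):
        lab = rng.permutation(labels)
        stat = y[lab == "A"].mean() - y[lab == "B"].mean()
        count += abs(stat) >= abs(obs)
    return obs, (count + 1) / (B + 1)

# Hypothetical session scores under two alternating treatments
y = np.array([3, 5, 4, 7, 6, 8, 5, 9, 4, 8], dtype=float)
labels = np.array(list("ABABABABAB"))
print(perm_test(y, labels))
```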

4.
In some randomized (drug versus placebo) clinical trials, the estimand of interest is the between-treatment difference in population means of a clinical endpoint that is free from the confounding effects of "rescue" medication (e.g., HbA1c change from baseline at 24 weeks that would be observed without rescue medication regardless of whether or when the assigned treatment was discontinued). In such settings, a missing data problem arises if some patients prematurely discontinue from the trial or initiate rescue medication while in the trial, the latter necessitating the discarding of post-rescue data. We caution that the commonly used mixed-effects model repeated measures analysis with the embedded missing at random assumption can deliver an exaggerated estimate of the aforementioned estimand of interest. This happens, in part, due to implicit imputation of an overly optimistic mean for "dropouts" (i.e., patients with missing endpoint data of interest) in the drug arm. We propose an alternative approach in which the missing mean for the drug arm dropouts is explicitly replaced with either the estimated mean of the entire endpoint distribution under placebo (primary analysis) or a sequence of increasingly more conservative means within a tipping point framework (sensitivity analysis); patient-level imputation is not required. A supplemental "dropout = failure" analysis is considered in which a common poor outcome is imputed for all dropouts followed by a between-treatment comparison using quantile regression. All analyses address the same estimand and can adjust for baseline covariates. Three examples and simulation results are used to support our recommendations.
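
A sketch of the core computation, assuming only arm-level summaries are needed: drug-arm dropouts are assigned the placebo mean plus a conservative shift that is walked over a grid in the tipping point spirit. Standard errors, covariate adjustment, and the data are omitted or hypothetical.

```python
import numpy as np

def adjusted_difference(drug_obs, placebo_all, n_drug_dropouts, shift=0.0):
    """Drug-vs-placebo mean difference with drug-arm dropouts assigned
    the placebo mean plus an optional conservative shift (tipping point).

    Sketch of the idea only; the paper works with estimated means and
    their standard errors rather than raw patient-level data.
    """
    n_obs = len(drug_obs)
    mu_placebo = np.mean(placebo_all)
    imputed = mu_placebo + shift              # mean assigned to dropouts
    w = n_obs / (n_obs + n_drug_dropouts)     # weight of completers
    mu_drug = w * np.mean(drug_obs) + (1 - w) * imputed
    return mu_drug - mu_placebo

rng = np.random.default_rng(1)
drug_obs = rng.normal(-1.0, 1.0, 80)      # hypothetical HbA1c changes, completers
placebo_all = rng.normal(-0.3, 1.0, 100)  # hypothetical placebo arm
for shift in [0.0, 0.1, 0.2, 0.3]:        # increasingly conservative means
    print(shift, round(adjusted_difference(drug_obs, placebo_all, 20, shift), 3))
```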

5.
We introduce distribution-free permutation tests and corresponding estimates for studying the effect of a treatment variable x on a response y. The methods apply in the presence of a multivariate covariate z. They are based on the assumption that the treatment values are assigned randomly to the subjects.

6.
Intention-to-treat (ITT) analysis is widely used to establish efficacy in randomized clinical trials. However, in a long-term outcomes study where non-adherence to study drug is substantial, the on-treatment effect of the study drug may be underestimated using the ITT analysis. The analyses presented herein are from the EVOLVE trial, a double-blind, placebo-controlled, event-driven cardiovascular outcomes study conducted to assess whether a treatment regimen including cinacalcet compared with placebo in addition to other conventional therapies reduces the risk of mortality and major cardiovascular events in patients receiving hemodialysis with secondary hyperparathyroidism. Pre-specified sensitivity analyses were performed to assess the impact of non-adherence on the estimated effect of cinacalcet. These analyses included lag-censoring, inverse probability of censoring weights (IPCW), the rank preserving structural failure time model (RPSFTM) and iterative parameter estimation (IPE). The relative hazard (cinacalcet versus placebo) of mortality and major cardiovascular events was 0.93 (95% confidence interval 0.85, 1.02) using the ITT analysis; 0.85 (0.76, 0.95) using lag-censoring analysis; 0.81 (0.70, 0.92) using IPCW; 0.85 (0.66, 1.04) using RPSFTM and 0.85 (0.75, 0.96) using IPE. These analyses, while not providing definitive evidence, suggest that the intervention may have an effect while subjects are receiving treatment. The ITT method remains the established method to evaluate efficacy of a new treatment; however, additional analyses should be considered to assess the on-treatment effect when substantial non-adherence to study drug is expected or observed.
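
A sketch of the lag-censoring step with hypothetical variable names: follow-up is cut a fixed lag after drug discontinuation, so later events no longer count. The censored data would then be fed to an ordinary survival analysis; IPCW, RPSFTM, and IPE require considerably more machinery and are not shown.

```python
import numpy as np

def lag_censor(time, event, disc_time, lag=30.0):
    """Lag-censoring: follow-up is censored `lag` days after a patient
    discontinues study drug; events occurring later do not count.

    Sketch with hypothetical names: `time`/`event` are the ITT follow-up
    data, `disc_time` is the time of drug discontinuation (np.inf if the
    patient never discontinued).
    """
    cutoff = disc_time + lag
    new_time = np.minimum(time, cutoff)
    new_event = event & (time <= cutoff)
    return new_time, new_event.astype(int)

time = np.array([200.0, 350.0, 500.0, 720.0])
event = np.array([1, 1, 0, 1], dtype=bool)
disc = np.array([np.inf, 100.0, 400.0, np.inf])  # inf = stayed on drug
print(lag_censor(time, event, disc))
# The second patient's event at day 350 falls after day 100 + 30,
# so that patient is censored at day 130 instead.
```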

7.
The theoretical literature on quantile and distribution function estimation in infinite populations is very rich, and invariance plays an important role in these studies. This is not the case for the commonly occurring problem of estimation of quantiles in finite populations. The latter is more complicated and interesting because an optimal strategy consists not only of an estimator, but also of a sampling design, and the estimator may depend on the design and on the labels of sampled individuals, whereas in iid sampling, design issues and labels do not exist. We study the estimation of finite population quantiles, with emphasis on estimators that are invariant under the group of monotone transformations of the data, and suitable invariant loss functions. Invariance under the finite group of permutations of the sample is also considered. We discuss nonrandomized and randomized estimators, best invariant and minimax estimators, and sampling strategies relative to different classes. Invariant loss functions and estimators in finite population sampling have a nonparametric flavor, and various natural combinatorial questions and tools arise as a result.

8.
In magazine advertisements for new drugs, it is common to see summary tables that compare the relative frequency of several side-effects for the drug and for a placebo, based on results from placebo-controlled clinical trials. The paper summarizes ways to conduct a global test of equality between the vector of population proportions for the drug and the vector of population proportions for the placebo. For multivariate normal responses, the Hotelling T²-test is a well-known method for testing equality of a vector of means for two independent samples. The tests in the paper are analogues of this test for vectors of binary responses. The likelihood ratio tests can be computationally intensive or have poor asymptotic performance. Simple quadratic forms comparing the two vectors provide alternative tests. Much better performance results from using a score-type version with a null-estimated covariance matrix than from the sample covariance matrix that applies with an ordinary Wald test. For either type of statistic, asymptotic inference is often inadequate, so we also present alternative, exact permutation tests. Follow-up inferences are also discussed, and our methods are applied to safety data from a phase II clinical trial.
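
A sketch, assuming equal-length binary response vectors per subject, of a score-type quadratic form with a null-estimated covariance matrix together with a Monte Carlo permutation p-value; the paper's exact permutation test enumerates rather than samples, and all data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def score_quadratic_test(X1, X2, B=5000):
    """Global test of equality of two vectors of proportions from
    independent samples of multivariate binary responses.

    Quadratic form in (p1 - p2) with a null (pooled) covariance estimate;
    the p-value comes from randomly sampled permutations of group labels.
    """
    n1, n2 = len(X1), len(X2)
    pooled = np.vstack([X1, X2])
    V = np.cov(pooled, rowvar=False) * (1 / n1 + 1 / n2)  # null-estimated
    Vinv = np.linalg.pinv(V)

    def stat(A, C):
        d = A.mean(0) - C.mean(0)
        return d @ Vinv @ d

    obs = stat(X1, X2)
    count = 0
    for _ in range(B):
        idx = rng.permutation(n1 + n2)
        count += stat(pooled[idx[:n1]], pooled[idx[n1:]]) >= obs
    return obs, (count + 1) / (B + 1)

X1 = (rng.random((40, 5)) < 0.25).astype(int)  # drug: 5 side-effects
X2 = (rng.random((40, 5)) < 0.15).astype(int)  # placebo
print(score_quadratic_test(X1, X2))
```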

9.
A longitudinal mixture model for classifying patients into responders and non-responders is established using both likelihood-based and Bayesian approaches. The model takes into consideration responders in the control group. Therefore, it is especially useful in situations where the placebo response is strong, or in equivalence trials where the drug in development is compared with a standard treatment. Under our model, a treatment shows evidence of being effective if it increases the proportion of responders or increases the response rate among responders in the treated group compared with the control group. Therefore, the model has the flexibility to accommodate different situations. The proposed method is illustrated using simulation and a depression clinical trial dataset for the likelihood-based approach, and the same depression clinical trial dataset for the Bayesian approach. The likelihood-based and Bayesian approaches generated consistent results for the depression trial data. In both the placebo group and the treated group, patients are classified into two components with distinct response rates. The proportion of responders is shown to be significantly higher in the treated group compared with the control group, suggesting the treatment paroxetine is effective.
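
A cross-sectional sketch of the classification idea, with a two-component normal mixture fitted separately to each arm; scikit-learn's GaussianMixture stands in for the paper's longitudinal likelihood and Bayesian fits, and the data are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

def responder_proportion(y):
    """Fit a two-component normal mixture to endpoint changes and return
    the weight of the better-responding component.

    Sketch only: the paper fits a longitudinal mixture (and a Bayesian
    version); here a cross-sectional change score stands in for that.
    """
    gm = GaussianMixture(n_components=2, random_state=0).fit(y.reshape(-1, 1))
    responder = np.argmin(gm.means_.ravel())   # larger drop = responder
    return gm.weights_[responder]

# Hypothetical depression-score changes: responders exist in both arms
placebo = np.concatenate([rng.normal(-12, 3, 30), rng.normal(-2, 3, 70)])
treated = np.concatenate([rng.normal(-12, 3, 55), rng.normal(-2, 3, 45)])
print(responder_proportion(placebo), responder_proportion(treated))
```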

10.
For clinical trials on neurodegenerative diseases such as Parkinson's or Alzheimer's, the distributions of psychometric measures for both placebo and treatment groups are generally skewed because of the characteristics of the diseases. Through an analytical, but computationally intensive, algorithm, we specifically compare power curves between 3- and 7-category ordinal logistic regression models in terms of the probability of detecting the treatment effect, assuming a symmetric distribution or skewed distributions for the placebo group. The proportional odds assumption under the ordinal logistic regression model plays an important role in these comparisons. The results indicate that there is no significant difference in the power curves between 3-category and 7-category response models when a symmetric distribution is assumed for the placebo group. However, when the skewness becomes more extreme for the placebo group, the loss of power can be substantial.
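
A simulation sketch of the comparison, assuming a skewed (gamma) latent score cut into equal-probability categories and a proportional-odds fit via statsmodels' OrderedModel; the paper uses an analytical algorithm, so this illustrates the question rather than its method, and all parameter values are hypothetical.

```python
import warnings
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

warnings.filterwarnings("ignore")  # silence convergence chatter in the loop
rng = np.random.default_rng(11)

def simulate_power(n_cat, n=100, beta=0.7, skew=1.5, nsim=300):
    """Monte Carlo power of the treatment-effect test in a proportional-odds
    model with n_cat response categories and a skewed placebo distribution."""
    hits = 0
    for _ in range(nsim):
        trt = rng.integers(0, 2, n)
        latent = rng.gamma(skew, 1.0, n) + beta * trt  # skewed latent score
        cuts = np.quantile(latent, np.linspace(0, 1, n_cat + 1)[1:-1])
        y = np.digitize(latent, cuts)                  # 0..n_cat-1 categories
        res = OrderedModel(y, trt[:, None], distr="logit").fit(
            method="bfgs", disp=False)
        hits += res.pvalues[0] < 0.05                  # treatment coefficient
    return hits / nsim

print("3 categories:", simulate_power(3), "7 categories:", simulate_power(7))
```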

11.
The Bayesian choice of crop variety and fertilizer dose
Recent contributions to the theory of optimizing fertilizer doses in agricultural crop production have introduced Bayesian ideas to incorporate information on crop yield from several environments and on soil nutrients from a soil test, but they have not used a fully Bayesian formulation. We present such a formulation and demonstrate how the resulting Bayes decision procedure can be evaluated in practice by using Markov chain Monte Carlo methods. The approach incorporates expert knowledge of the crop and of regional and local soil conditions and allows a choice of crop variety as well as of fertilizer level. Alternative dose-response functions are expressed in terms of a common interpretable set of parameters to facilitate model comparisons and the specification of prior distributions. The approach is illustrated with a set of yield data from spring barley nitrogen-response trials and is found to be robust to changes in the dose-response function and the prior distribution for indigenous soil nitrogen.
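
A self-contained sketch of the decision-theoretic step, using a hand-rolled random-walk Metropolis sampler for a quadratic dose-response and a profit criterion; the paper's fully Bayesian formulation covers several response functions, variety choice, and informative expert priors, none of which are shown. All data, prices, and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical spring-barley trial: yield (t/ha) at nitrogen doses (kg/ha)
dose = np.array([0, 40, 80, 120, 160, 200], dtype=float)
yld = np.array([3.1, 4.6, 5.6, 6.1, 6.2, 6.0])

def log_post(theta):
    """Log posterior for a quadratic dose-response with vague normal priors.
    theta = (a, b, c, log_sigma); a sketch, not the paper's model."""
    a, b, c, log_s = theta
    mu = a + b * dose - c * dose**2
    s = np.exp(log_s)
    loglik = -0.5 * np.sum(((yld - mu) / s) ** 2) - len(dose) * log_s
    logpri = -0.5 * (a**2 / 100 + b**2 + (100 * c) ** 2 + log_s**2)
    return loglik + logpri

# Random-walk Metropolis; keep thinned draws after burn-in
theta = np.array([3.0, 0.03, 1e-4, np.log(0.3)])
step = np.array([0.1, 0.003, 2e-5, 0.05])
lp = log_post(theta)
draws = []
for i in range(20000):
    prop = theta + step * rng.standard_normal(4)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 10000 and i % 10 == 0:
        draws.append(theta.copy())
draws = np.array(draws)

# Bayes decision: dose maximizing posterior expected profit
price, cost = 150.0, 0.8          # hypothetical grain price and N cost
grid = np.linspace(0, 250, 251)
mu = draws[:, 0:1] + draws[:, 1:2] * grid - draws[:, 2:3] * grid**2
profit = price * mu - cost * grid
print("optimal dose:", grid[np.argmax(profit.mean(axis=0))])
```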

12.
We respond to recent criticism of bootstrap confidence intervals for the correlation coefficient by arguing that, in the correlation coefficient case, non-standard methods should be employed. We propose two such methods. The first is a bootstrap coverage correction algorithm using iterated bootstrap techniques (Hall, 1986; Beran, 1987a; Hall and Martin, 1988) applied to ordinary percentile-method intervals (Efron, 1979), giving intervals with high coverage accuracy and stable lengths and endpoints. The simulation study carried out for this method gives results for sample sizes 8, 10, and 12 in three parent populations. The second technique involves the construction of percentile-t bootstrap confidence intervals for a transformed correlation coefficient, followed by an inversion of the transformation, to obtain "transformed percentile-t" intervals for the correlation coefficient. In particular, Fisher's z-transformation is used, and nonparametric delta method and jackknife variance estimates are used to Studentize the transformed correlation coefficient, with the jackknife-Studentized transformed percentile-t interval yielding the better coverage accuracy, in general. Percentile-t intervals constructed without first using the transformation perform very poorly, having large expected lengths and erratically fluctuating endpoints. The simulation study illustrating this technique gives results for sample sizes 10, 15 and 20 in four parent populations. Our techniques provide confidence intervals for the correlation coefficient which have good coverage accuracy (unlike ordinary percentile intervals), and stable lengths and endpoints (unlike ordinary percentile-t intervals).
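
A sketch of the second technique: percentile-t applied to Fisher's z-transformed correlation with jackknife Studentization, then inverted with tanh. Degenerate resamples (e.g., a bootstrap correlation of exactly ±1) are not guarded against for brevity, and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def jack_se(x, y):
    """Jackknife standard error of the Fisher z-transformed correlation."""
    n = len(x)
    z = np.array([np.arctanh(np.corrcoef(np.delete(x, i), np.delete(y, i))[0, 1])
                  for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((z - z.mean()) ** 2))

def transformed_percentile_t_ci(x, y, B=1000, alpha=0.05):
    """Percentile-t interval for atanh(r), mapped back with tanh.
    Sketch of the jackknife-Studentized variant described above."""
    n = len(x)
    z_hat = np.arctanh(np.corrcoef(x, y)[0, 1])
    se_hat = jack_se(x, y)
    t = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, n)
        xb, yb = x[idx], y[idx]
        zb = np.arctanh(np.corrcoef(xb, yb)[0, 1])
        t[b] = (zb - z_hat) / jack_se(xb, yb)
    q_lo, q_hi = np.quantile(t, [alpha / 2, 1 - alpha / 2])
    return np.tanh(z_hat - q_hi * se_hat), np.tanh(z_hat - q_lo * se_hat)

x = rng.standard_normal(15)
y = 0.6 * x + 0.8 * rng.standard_normal(15)
print(transformed_percentile_t_ci(x, y))
```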

13.
We extend the random permutation model to obtain the best linear unbiased estimator of a finite population mean accounting for auxiliary variables under simple random sampling without replacement (SRS) or stratified SRS. The proposed method provides a systematic design-based justification for well-known results involving common estimators derived under minimal assumptions that do not require specification of a functional relationship between the response and the auxiliary variables.
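
A sketch of the classical design-based regression estimator that results of this kind justify, for a single auxiliary variable under SRS without replacement; names and population values are hypothetical.

```python
import numpy as np

def regression_estimator(y_s, x_s, X_bar):
    """Regression estimator of a finite population mean under SRS without
    replacement, using the known population mean X_bar of the auxiliary
    variable. Sketch only: no functional model for y given x is assumed."""
    X = np.column_stack([np.ones(len(y_s)), x_s])
    beta = np.linalg.lstsq(X, y_s, rcond=None)[0]
    return np.concatenate([[1.0], np.atleast_1d(X_bar)]) @ beta

rng = np.random.default_rng(6)
N, n = 1000, 50
x = rng.gamma(2.0, 2.0, N)            # auxiliary variable, known for all N units
y = 3 + 1.5 * x + rng.standard_normal(N)
s = rng.choice(N, n, replace=False)   # SRS without replacement
print(regression_estimator(y[s], x[s], x.mean()), y.mean())
```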

14.
This paper discusses a curved exponential family of distributions which is defined by a differential equation with respect to the expectation parameters in the two-dimensional exponential family. The differential equation considered here is the same as the one given by Efron (1975) for the trinomial distribution. This equation is extended here to a general exponential family, and called Efron's parameterization in the two-dimensional exponential family. The solution of Efron's parameterization is obtained explicitly in an exponential family, although Kumagai & Inagaki (1996) showed that there exists no proper solution of Efron's equation for the trinomial distribution, in line with the counterexample given by Efron (1975, p. 1206). The paper gives some characterizations of Efron's parameterization with special reference to Fisher's circle model. These characterizations lead to the two-dimensional normal distribution and a spiral curve in the plane.

15.
Under the setting of the columnwise orthogonal polynomial model in the context of the general factorial, it is shown that (i) the determinant of the information matrix of a design relative to an admissible vector of effects is invariant under a permutation of levels; (ii) the unbiased estimation of a linear function of an admissible vector of effects can be obtained under equal probability randomization. These results extend the work on invariance and randomization carried out under the more restrictive assumption of the orthonormal polynomial model by Srivastava, Raktoe and Pesotan (1976) and Pesotan and Raktoe (1981). Moreover, the problem of the construction of D-optimal main effect designs in the symmetrical factorial is reduced to a study of a special class of (0,1)-matrices using the Helmert matrix model. Using this class of (0,1)-matrices and the determinant invariance result, some classes of D-optimal main effect designs of the s² and s³ factorials are presented.

16.
Motivated by practical issues, a new stochastic order for random variables is introduced by comparing all their percentile residual life functions up to a certain instant. Some interpretations of these stochastic orders are given, and various properties of them are derived. The relationships to other stochastic orders are studied, and an application in reliability theory is also described. Finally, we present some characterization results for the "decreasing percentile residual life up to time t₀" aging notion.
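
A short sketch of the empirical percentile residual life function that underlies the new order: the alpha quantile of the remaining life X - t among survivors past t. Data are hypothetical.

```python
import numpy as np

def percentile_residual_life(x, t, alpha=0.5):
    """Empirical alpha-percentile residual life at time t: the alpha
    quantile of X - t given X > t. The order compares such functions
    (over the relevant percentiles/times up to an instant t0)."""
    resid = x[x > t] - t
    return np.quantile(resid, alpha) if resid.size else np.nan

rng = np.random.default_rng(7)
x = rng.weibull(1.5, 5000) * 10        # hypothetical lifetimes
for t in [0.0, 2.0, 5.0, 8.0]:
    print(t, round(percentile_residual_life(x, t, 0.5), 2))
```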

17.
The Kruskal–Wallis test is a rank-based one-way ANOVA. Its test statistic is shown here to be a quadratic form among the Mann–Whitney or Kendall tau concordance measures between pairs of treatments. But the full set of such concordance measures has more degrees of freedom than the Kruskal–Wallis test uses, and the independent surplus is attributable to circularity, or non-transitive, effects. The meaning of circularity is well illustrated by Efron dice. The cases of k = 3, 4 treatments are analysed thoroughly in this paper, which also shows how the full sum of squares among all concordance measures can be decomposed into uncorrelated transitive and non-transitive circularity effects. A multiple comparisons procedure based on patterns of transitive orderings among treatments is implemented. The testing of circularities involves non-standard asymptotic distributions. The asymptotic theory is deferred, but Monte Carlo permutation tests are easy to implement.
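
A sketch of the ingredients: the Kruskal–Wallis statistic from SciPy next to the pairwise Mann–Whitney concordance estimates it summarizes, for k = 3 hypothetical treatments. The decomposition into transitive and circularity effects is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def concordance(a, b):
    """Estimate of P(A > B) + 0.5 * P(A = B) from the Mann-Whitney U."""
    u = stats.mannwhitneyu(a, b, alternative="two-sided").statistic
    return u / (len(a) * len(b))

# Three hypothetical treatments with shifted locations
groups = [rng.normal(m, 1.0, 20) for m in (0.0, 0.3, 0.6)]
print("Kruskal-Wallis:", stats.kruskal(*groups))
for i in range(3):
    for j in range(i + 1, 3):
        print(f"P(T{i+1} > T{j+1}) ~", round(concordance(groups[i], groups[j]), 3))
# A non-transitive ("Efron dice") pattern would show e.g. T1 beats T2,
# T2 beats T3, yet T3 beats T1 -- a circularity effect invisible to the
# Kruskal-Wallis quadratic form alone.
```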

18.
Without the exchangeability assumption, permutation tests for comparing two population means do not provide exact control of the probability of making a Type I error. Another drawback of permutation tests is that they cannot be used to test hypotheses about a single population. In this paper, we propose a new type of permutation test for testing the difference between two population means: the split sample permutation t-tests. We show that the split sample permutation t-tests do not require the exchangeability assumption, are asymptotically exact, and can be easily extended to testing hypotheses about one population. Extensive simulations were carried out to evaluate the performance of two specific split sample permutation t-tests: the split in the middle permutation t-test and the split in the end permutation t-test. The simulation results show that the split in the middle permutation t-test has performance comparable to the permutation test if the population distributions are symmetric and satisfy the exchangeability assumption. Otherwise, the split in the end permutation t-test has significantly more accurate control of the level of significance than the split in the middle permutation t-test and other existing permutation tests.

19.
The power of a statistical test depends on the sample size. Moreover, in a randomized trial where two treatments are compared, the power also depends on the number of assignments of each treatment. We can treat the power as the conditional probability of correctly detecting a treatment effect given a particular treatment allocation status. This paper uses a simple z-test and a t-test to demonstrate and analyze the power function under the biased coin design proposed by Efron in 1971. We numerically show that Efron's biased coin design is uniformly more powerful than perfect simple randomization.
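
A simulation sketch comparing power under simple (complete) randomization and under Efron's biased coin rule, which assigns the under-represented arm with probability 2/3; here the test is a two-sample t-test, and sample size, effect size, and simulation counts are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

def allocate(n, rule, p=2 / 3):
    """Assignments under simple randomization or Efron's biased coin design
    (the under-represented arm is favored with probability p)."""
    t = np.zeros(n, dtype=int)
    for i in range(n):
        d = 2 * t[:i].sum() - i           # imbalance: (#arm 1) - (#arm 0)
        if rule == "simple" or d == 0:
            prob = 0.5
        else:
            prob = p if d < 0 else 1 - p  # favor the lagging arm
        t[i] = rng.random() < prob
    return t

def power(rule, n=40, delta=0.8, nsim=2000):
    hits = 0
    for _ in range(nsim):
        t = allocate(n, rule)
        if t.sum() in (0, n):             # degenerate allocation: skip test
            continue
        y = delta * t + rng.standard_normal(n)
        hits += stats.ttest_ind(y[t == 1], y[t == 0]).pvalue < 0.05
    return hits / nsim

print("simple:", power("simple"), "biased coin:", power("biased"))
```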

20.
Testing for stochastic ordering is of considerable importance when increasing doses of a treatment are being compared, but it has received much less attention in applications involving multivariate responses. We propose a permutation test for testing against multivariate stochastic ordering. This test is distribution-free, and no assumption is made about the dependence relations among the variables. A comparative simulation study shows that the proposed solution exhibits a good overall performance when compared with existing tests that can be used for the same problem.
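
One plausible construction as a sketch: componentwise one-sided mean differences combined by their minimum, with a group-label permutation p-value. The paper's combining rule may well differ, and the data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(10)

def multivariate_ordering_test(X_low, X_high, B=5000):
    """Permutation test against the alternative that the high-dose group
    is stochastically larger in every response variable.

    Sketch: the statistic is the *smallest* one-sided mean difference
    across variables, so it is large only when all components agree.
    """
    n1 = len(X_low)
    pooled = np.vstack([X_low, X_high])

    def stat(A, C):
        return np.min(C.mean(0) - A.mean(0))  # weakest one-sided evidence

    obs = stat(X_low, X_high)
    count = 0
    for _ in range(B):
        idx = rng.permutation(len(pooled))
        count += stat(pooled[idx[:n1]], pooled[idx[n1:]]) >= obs
    return obs, (count + 1) / (B + 1)

X_low = rng.normal(0.0, 1.0, (30, 3))
X_high = rng.normal(0.4, 1.0, (30, 3))   # shifted up in all 3 variables
print(multivariate_ordering_test(X_low, X_high))
```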

