Similar Literature
20 similar records retrieved (search time: 382 ms)
1.
Various strategies are investigated for selecting one of two medical treatments when patients may be divided into k ≥ 2 categories on the basis of their expected differences in response to the two treatments. The strategies compared are (1) k independent decisions, (2) a single overall decision made on the basis of simple pooling, and (3) the Bayes strategy. The optimal clinical trial size is supplied for each strategy, and conditions are delineated under which each strategy is to be preferred.
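A rough Monte Carlo sketch of how strategies (1) and (2) can disagree when the better treatment differs across categories; all response rates, the trial size, and the decision rule are invented for illustration, and the paper's Bayes strategy is not implemented here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): k = 2 patient categories with
# true response-rate differences of opposite sign, so pooling can mislead.
p_A = np.array([0.60, 0.40])   # treatment A response rate per category
p_B = np.array([0.40, 0.60])   # treatment B response rate per category
n = 100                        # patients per treatment per category

def decide(xa, xb):
    """Pick the treatment with the higher observed success count."""
    return "A" if xa >= xb else "B"

correct_separate, correct_pooled, sims = 0.0, 0.0, 5000
for _ in range(sims):
    xa = rng.binomial(n, p_A)          # successes of A in each category
    xb = rng.binomial(n, p_B)
    # Strategy 1: one decision per category; correct iff it matches the
    # truly better treatment in that category.
    sep = [decide(xa[i], xb[i]) == ("A" if p_A[i] > p_B[i] else "B")
           for i in range(2)]
    correct_separate += np.mean(sep)
    # Strategy 2: pool categories, one overall decision applied to all.
    pooled = decide(xa.sum(), xb.sum())
    correct_pooled += np.mean([pooled == ("A" if p_A[i] > p_B[i] else "B")
                               for i in range(2)])

print("per-category decisions, frac. correct:", correct_separate / sims)
print("pooled decision,        frac. correct:", correct_pooled / sims)
```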

2.
Consider the problem of comparing the success rates of c treatments, each of which induces a Bernoulli response, on the basis of n matched samples. We present a method for deriving confidence intervals for the pairwise difference in success rates that provides uniformly shorter intervals than the procedure proposed by Bhapkar and Somes (1976). Comparisons of the lengths of the respective intervals are provided, and some observations are made regarding the assumptions required for the use of Cochran's Q-test (1950).
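For reference, the Cochran's Q-test mentioned at the end can be computed directly from the standard textbook formula; a minimal sketch on toy matched binary data:

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(X):
    """Cochran's Q test for k matched binary samples.
    X: (n_subjects, k_treatments) array of 0/1 responses."""
    X = np.asarray(X)
    b, k = X.shape
    C = X.sum(axis=0)            # treatment (column) totals
    R = X.sum(axis=1)            # subject (row) totals
    N = C.sum()
    Q = (k - 1) * (k * (C**2).sum() - N**2) / (k * N - (R**2).sum())
    return Q, chi2.sf(Q, k - 1)  # Q is asymptotically chi-square(k-1)

# Toy data: 8 matched subjects, 3 treatments
X = np.array([[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 0, 0],
              [1, 1, 0], [1, 0, 1], [1, 1, 0], [0, 1, 0]])
Q, p = cochrans_q(X)
print(f"Q = {Q:.3f}, p = {p:.3f}")
```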

3.
In the method of paired comparisons (PCs), treatments are compared on the basis of the qualitative characteristics they possess, as reflected in sensory evaluations made by judges. Situations may arise, however, in which, in addition to qualitative merits, judges assign quantitative weights to specify the relative importance of the treatments. In this study, an attempt is made to reconcile qualitative and quantitative PCs by assigning quantitative weights to treatments with qualitative merits, extending the Bradley–Terry (BT) model. The behavior of the existing BT model and the proposed weighted BT model is studied through goodness-of-fit tests. Experimental and simulated data sets are used for illustration.
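The ordinary BT model underlying the proposal can be fitted with the standard MM (Zermelo) iteration; a minimal sketch on toy preference counts follows (the paper's weighted extension is not reproduced here):

```python
import numpy as np

def bradley_terry(wins, n_iter=200):
    """MM (Zermelo) iteration for Bradley-Terry worth parameters.
    wins[i, j] = number of times treatment i was preferred to j."""
    wins = np.asarray(wins, float)
    t = wins.shape[0]
    n = wins + wins.T                 # comparisons per pair
    W = wins.sum(axis=1)              # total wins of each treatment
    p = np.ones(t) / t
    for _ in range(n_iter):
        new_p = np.empty(t)
        for i in range(t):
            denom = sum(n[i, j] / (p[i] + p[j]) for j in range(t) if j != i)
            new_p[i] = W[i] / denom
        p = new_p / new_p.sum()       # normalize worths to sum to 1
    return p

# Toy preference counts among 3 treatments
wins = np.array([[0, 3, 5],
                 [2, 0, 4],
                 [1, 2, 0]])
print(np.round(bradley_terry(wins), 3))
```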

4.
Plotting log−log survival functions against time for different categories, or combinations of categories, of covariates is perhaps the easiest and most commonly used graphical tool for checking the proportional hazards (PH) assumption. One problem with this technique is that the covariates must be categorical, or made categorical by grouping continuous covariates. Other limitations include the subjectivity of decisions based on eye-judgment of the plots and the frequent inconclusiveness that arises as the number of categories and/or covariates grows. This paper proposes a non-graphical (numerical) test of the PH assumption that makes use of the log−log survival function. The test enables checking proportionality for categorical as well as continuous covariates and overcomes the other limitations of the graphical method. The observed power and size of the test are compared with those of other tests of its kind through simulation experiments, which demonstrate that the proposed test is more powerful than some of the most sensitive tests in the literature across a wide range of survival situations. The test is illustrated using the widely used gastric cancer data.
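A minimal sketch of the underlying graphical quantity, the Kaplan–Meier estimate transformed to log(−log S(t)), computed with plain numpy on simulated data (the paper's numerical test itself is not reproduced; under PH, the curves for different groups should be roughly parallel):

```python
import numpy as np

def km_loglog(times, events):
    """Kaplan-Meier estimate and log(-log S(t)) at each event time."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    n_at_risk = len(times)
    S, t_out, s_out = 1.0, [], []
    for t in np.unique(times):           # distinct times, ascending
        at = times == t
        d = events[at].sum()             # deaths at time t
        if d > 0:
            S *= 1 - d / n_at_risk
            t_out.append(t)
            s_out.append(S)
        n_at_risk -= at.sum()            # drop deaths and censorings at t
    return np.array(t_out), np.log(-np.log(np.array(s_out)))

rng = np.random.default_rng(1)
for label, scale in [("group 0", 1.0), ("group 1", 0.5)]:
    t = rng.exponential(scale, 80)       # event times
    c = rng.exponential(1.5, 80)         # censoring times
    obs, ev = np.minimum(t, c), (t <= c).astype(int)
    tt, ll = km_loglog(obs, ev)
    print(label, "log(-log S) at first 3 event times:", np.round(ll[:3], 2))
```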

5.
Research involving a clinical intervention is normally aimed at testing treatment effects on a dependent variable that is assumed to be a relevant indicator of health or quality-of-life status. In much clinical research, large-n trials are impractical because the number of available individuals within well-defined categories is limited, which makes single-case experiments increasingly important. Their goal is to investigate whether the treatments considered in the study differ in effect. In this setting, valid inference generally cannot be made using the parametric statistical procedures typically applied to clinical trials and other large-n designs, so nonparametric tools are a valid alternative for analyzing this kind of data. We propose a permutation solution for assessing treatment effects in single-case experiments within alternation designs, together with an extension to more than two treatments. A simulation study shows that the approach is reliable under the null hypothesis, powerful under the alternative, and an improvement over a considered competitor. Finally, we present the results of a real-case application.
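A minimal sketch of a permutation test for a single-case alternation design, using invented session data and unrestricted label permutations (the paper's procedure respects the design's actual randomization scheme, which is more restrictive):

```python
import numpy as np

rng = np.random.default_rng(2)

# Single-case alternation design: one subject, treatments A and B
# alternated over sessions. Illustrative outcomes, not from the paper.
y = np.array([4.1, 6.0, 3.8, 6.4, 4.5, 5.9, 4.0, 6.2, 4.3, 6.1])
labels = np.array(list("ABABABABAB"))

obs = y[labels == "B"].mean() - y[labels == "A"].mean()

n_perm, count = 10000, 0
for _ in range(n_perm):
    perm = rng.permutation(labels)       # reshuffle treatment labels
    stat = y[perm == "B"].mean() - y[perm == "A"].mean()
    count += abs(stat) >= abs(obs)       # two-sided comparison
print(f"observed diff = {obs:.2f}, permutation p = {count / n_perm:.4f}")
```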

6.
We investigate a sequential procedure for comparing two treatments in a binomial clinical trial. The procedure uses play-the-winner sampling, terminating as soon as the absolute difference in the number of successes of the two treatments reaches a critical value. The important feature of our procedure is that the critical value is modified as the experiment progresses. Numerical results show that this procedure is preferred to all other existing procedures in terms of both the sample size on the poorer treatment and the total sample size.
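A simulation sketch of play-the-winner sampling with an absolute-difference stopping rule; here the critical value is held fixed, whereas the paper's key feature is that it is modified as the trial progresses:

```python
import numpy as np

rng = np.random.default_rng(3)

def play_the_winner(pA, pB, critical):
    """Play-the-winner sampling: stay on a treatment after a success,
    switch after a failure; stop when |successes_A - successes_B|
    reaches the critical value (fixed here for simplicity)."""
    succ = {"A": 0, "B": 0}
    n = {"A": 0, "B": 0}
    current = "A"
    while abs(succ["A"] - succ["B"]) < critical:
        p = pA if current == "A" else pB
        win = rng.random() < p
        n[current] += 1
        succ[current] += win
        if not win:                      # switch treatment after a failure
            current = "B" if current == "A" else "A"
    winner = "A" if succ["A"] > succ["B"] else "B"
    return winner, n

wins = sum(play_the_winner(0.7, 0.5, critical=5)[0] == "A"
           for _ in range(2000))
print("P(select better treatment A) ≈", wins / 2000)
```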

7.
Nonparametric predictive inference (NPI) is a powerful frequentist statistical framework based only on an exchangeability assumption for future and past observations, made possible by the use of lower and upper probabilities. In this article, NPI is presented for ordinal data, that is, categorical data with an ordering of the categories. The method uses a latent variable representation of the observations and categories on the real line. Lower and upper probabilities for events involving the next observation are presented and briefly compared to NPI for non-ordered categorical data. As an application, the comparison of multiple groups of ordinal data is presented.

8.
The treatment sum of squares in the one-way analysis of variance can be expressed in two different ways: as a sum of comparisons between each treatment and the remaining treatments combined, or as a sum of comparisons between the treatments two at a time. When comparisons between treatments are made with the Wilcoxon rank sum statistic, these two expressions lead to two different tests: that of Kruskal and Wallis, and one essentially the same as that proposed by Crouse (1961, 1966). The latter statistic is known to be asymptotically distributed as a chi-squared variable when the number of replicates is large. Here it is shown to be asymptotically normal when the replicates are few but the number of treatments is large. For all combinations of numbers of replicates and treatments, its empirical distribution is well approximated by a beta distribution.
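A sketch contrasting the two statistics on toy data: scipy supplies the Kruskal–Wallis test, and a pairwise analogue in the spirit of Crouse's statistic is built from standardized two-sample Wilcoxon (Mann–Whitney) statistics, though the exact standardization used by Crouse may differ:

```python
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(4)
groups = [rng.normal(loc, 1.0, size=5) for loc in (0.0, 0.3, 0.6, 0.0)]

# Kruskal-Wallis: all treatments compared via joint ranks.
H, p = kruskal(*groups)
print(f"Kruskal-Wallis H = {H:.3f}, p = {p:.3f}")

# Sum of squared standardized pairwise Wilcoxon statistics.
T, k = 0.0, len(groups)
for i in range(k):
    for j in range(i + 1, k):
        m, n = len(groups[i]), len(groups[j])
        U = mannwhitneyu(groups[i], groups[j]).statistic
        mu, var = m * n / 2, m * n * (m + n + 1) / 12   # null moments of U
        T += (U - mu) ** 2 / var
print(f"sum of squared standardized pairwise Wilcoxon stats = {T:.3f}")
```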

9.
A common problem in medical statistics is discriminating between two groups on the basis of diagnostic information: information on patient characteristics is used to classify individuals as diseased or disease-free, often with respect to a particular disease. This discrimination has two probabilistic components: (1) the discrimination is not without error, and (2) in many cases the a priori chance of disease can be estimated. Logistic models (Cox 1970; Anderson 1972) provide methods for incorporating both components: the a posteriori probability of disease may be estimated for a patient from both current measurements of patient characteristics and prior information, with the parameters of the logistic model estimated from a calibration trial. In practice, not one but several sets of measurements of one patient characteristic may be made on a questionable case, and these measurements are typically correlated, far from independent. How should such correlated measurements be used? This paper presents a method for incorporating several sets of measurements in the classification of a case.
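An illustrative sketch of the logistic ingredients only, not the paper's method for correlated repeats (which is precisely what the paper improves on): repeated measurements are naively averaged, and the intercept is shifted to move from the calibration-trial prevalence to a new a priori disease probability, a standard prior-adjustment device:

```python
import numpy as np

def posterior_prob(measurements, beta0, beta1, prior_cal, prior_new):
    """Posterior disease probability from repeated measurements.
    beta0, beta1, prior_cal are assumed known from a calibration trial;
    the repeats are summarized by their mean (a naive choice)."""
    x = np.mean(measurements)
    # intercept correction for a different a priori disease probability
    offset = (np.log(prior_new / (1 - prior_new))
              - np.log(prior_cal / (1 - prior_cal)))
    eta = beta0 + offset + beta1 * x
    return 1 / (1 + np.exp(-eta))

# Three correlated measurements of one characteristic on a questionable case
print(posterior_prob([2.1, 2.4, 2.2], beta0=-3.0, beta1=1.2,
                     prior_cal=0.5, prior_new=0.1))
```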

10.
In this study, an attempt is made to classify textile fabrics on the basis of their physical properties using multivariate statistical techniques, namely discriminant analysis and cluster analysis. Discriminant functions were first constructed for classifying three known categories of fabrics, made of polyester, lyocell/viscose, and treated polyester; the classification was 100% accurate. Each of the three categories was then subjected to the K-means clustering algorithm, which yielded three clusters. These clusters were in turn subjected to discriminant analysis, which again gave 100% correct classification, indicating that the clusters are well separated. The properties of the clusters are also investigated with respect to the measurements.
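A sketch of this analysis pipeline with scikit-learn on synthetic stand-in data, since the fabric measurements themselves are not available here:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Synthetic stand-in for fabric physical properties:
# 3 classes x 30 samples, 4 measured properties.
X = np.vstack([rng.normal(mu, 0.4, size=(30, 4)) for mu in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 30)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("discriminant-analysis training accuracy:", lda.score(X, y))

# Within each class, split further with K-means, then check separation
# of the resulting clusters with a second discriminant analysis.
for cls in range(3):
    Xc = X[y == cls]
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Xc)
    acc = LinearDiscriminantAnalysis().fit(Xc, clusters).score(Xc, clusters)
    print(f"class {cls}: LDA accuracy on its 3 K-means clusters = {acc:.2f}")
```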

11.
Consider the problem of testing for independence against stochastic order in a 2 × J contingency table with two treatments and J ordered categories. By conditioning on the margins, the null hypothesis becomes simple. Careful selection of the conditional alternative hypothesis then allows the problem to be formulated as one of a class of problems for which the minimal complete class of admissible tests is known. The exact versions of many common tests, such as t-tests and the Smirnov test, are shown to be inadmissible, so their non-randomized versions are overly conservative. The proportional hazards and proportional odds tests are shown to be admissible for a given data set and size. A new result allows a proof of the admissibility of convex hull and adaptive tests for all data sets and sizes.

12.
Multinomial goodness-of-fit tests arise in diverse settings. The long history of the problem has spawned a multitude of asymptotic tests, but when the sample size is small relative to the number of categories, the accuracy of these tests is compromised and an exact test is the prudent option. Such tests are computationally intensive, however, and require efficient algorithms. This paper gives a conceptual overview and empirical comparison of two approaches, the network and fast Fourier transform (FFT) algorithms, for an exact goodness-of-fit test on a multinomial. We show that a recursive execution of a polynomial product forms the basis of both approaches. Specific details for implementing the network method, and techniques for enhancing the efficiency of the FFT algorithm, are given. Our empirical comparisons show that for exact analysis with the chi-square and likelihood ratio statistics, the network-cum-polynomial-multiplication algorithm is the more efficient and accurate of the two.
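For very small problems the exact test can be carried out by brute-force enumeration of all count vectors, which is exactly the cost that the network and FFT algorithms are designed to avoid; a minimal sketch:

```python
import numpy as np
from itertools import product
from math import factorial, prod

def exact_multinomial_pvalue(counts, probs):
    """Exact p-value of the chi-square statistic for a multinomial, by
    full enumeration. Feasible only for tiny n and few categories."""
    counts, probs = np.asarray(counts), np.asarray(probs)
    n, k = int(counts.sum()), len(counts)
    exp = n * probs

    def chisq(c):
        return float(((np.asarray(c) - exp) ** 2 / exp).sum())

    def pmf(c):
        coef = factorial(n) / prod(factorial(x) for x in c)
        return coef * prod(p ** x for p, x in zip(probs, c))

    t_obs, pval = chisq(counts), 0.0
    for c in product(range(n + 1), repeat=k):       # all count vectors
        if sum(c) == n and chisq(c) >= t_obs - 1e-12:
            pval += pmf(c)                          # as or more extreme
    return pval

print(exact_multinomial_pvalue([7, 2, 1], [1/3, 1/3, 1/3]))
```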

13.
Treatment of complex diseases such as cancer, leukaemia, acquired immune deficiency syndrome and depression usually follows complex regimes consisting of time-varying multiple courses of the same or different treatments. The goal is to achieve the largest overall benefit as defined by a common end point such as survival. An adaptive treatment strategy is a sequence of treatments applied at different stages of therapy on the basis of the individual's history of covariates and intermediate responses to earlier treatments; in many cases treatment assignment depends only on intermediate response and prior treatments. Clinical trials are often designed to compare two or more adaptive treatment strategies, commonly by sequential randomization: patients are randomized on entry to the available first-stage treatments and then, on the basis of their response to the initial treatments, randomized to second-stage treatments, and so on. The analysis often ignores this feature of the randomization and frequently treats each stage separately. Recent literature has suggested several semiparametric and Bayesian methods for inference about adaptive treatment strategies from sequentially randomized trials. We develop a parametric approach using mixture distributions to model the survival times under different adaptive treatment strategies, and we show that the proposed estimators are asymptotically unbiased and can be easily implemented using existing routines in statistical software packages.
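A toy version of the parametric idea: under a two-stage strategy, survival is a mixture over responders and non-responders to the first-stage treatment. The sketch below fits a two-component exponential mixture by maximum likelihood on an invented setup with no censoring (which the paper's method does handle):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon

rng = np.random.default_rng(10)

# Responders (prob pi) to the first-stage treatment go on to maintenance;
# non-responders are switched. Survival under the strategy is a mixture.
pi_true, scale_resp, scale_nonresp = 0.4, 10.0, 4.0
resp = rng.random(300) < pi_true
t = np.where(resp, rng.exponential(scale_resp, 300),
             rng.exponential(scale_nonresp, 300))

def negloglik(theta):
    # unconstrained parametrization: logit(pi), log scales
    pi = 1 / (1 + np.exp(-theta[0]))
    s1, s2 = np.exp(theta[1]), np.exp(theta[2])
    dens = pi * expon.pdf(t, scale=s1) + (1 - pi) * expon.pdf(t, scale=s2)
    return -np.log(dens).sum()

fit = minimize(negloglik, x0=[0.0, np.log(8.0), np.log(3.0)])
pi_hat = 1 / (1 + np.exp(-fit.x[0]))
print("pi-hat:", round(pi_hat, 3),
      "scales:", np.round(np.exp(fit.x[1:]), 2))
```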

14.
A change-over model with correlated errors is discussed. In particular the results of Patterson (1950) and Lucas (1951) for balanced change-over designs are extended to more than four treatments and more general variance matrices. A connection with recent work on neighbour models for field trials is made.

15.
The authors propose a class of statistics based on Rao's score for the sequential testing of composite hypotheses comparing two treatments (populations). Asymptotic approximations of the statistics lead them to propose sequential tests and to derive their monitoring boundaries. As special cases, they construct sequential versions of the two-sample t-test for normal populations and two-sample z-score tests for binomial populations. The proposed algorithms are simple and easy to compute, as no numerical integration is required. Furthermore, the user can analyze the data at any time regardless of how many inspections have been made. Monte Carlo simulations allow the authors to compare the power and the average stopping time (also known as the average sample number) of the proposed tests to those of nonsequential and group sequential tests. A two-armed comparative clinical trial in patients with adult leukemia allows them to illustrate the efficiency of their methods in the case of binary responses.
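A simulation sketch of the binomial case: a two-sample z-score monitored at repeated looks against a flat boundary. The paper derives proper score-based monitoring boundaries; the fixed ±3 here is only illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def sequential_z_monitor(pA, pB, boundary=3.0, look_every=10, max_n=500):
    """Monitor a pooled two-sample z-score for binomial responses; stop
    when the statistic crosses a flat boundary."""
    xA = xB = nA = nB = 0
    z = 0.0
    while nA + nB < max_n:
        xA += rng.binomial(look_every, pA); nA += look_every
        xB += rng.binomial(look_every, pB); nB += look_every
        p_pool = (xA + xB) / (nA + nB)
        se = np.sqrt(p_pool * (1 - p_pool) * (1 / nA + 1 / nB))
        z = ((xA / nA) - (xB / nB)) / se if se > 0 else 0.0
        if abs(z) >= boundary:
            return z, nA + nB            # reject and stop early
    return z, nA + nB                    # reached maximum sample size

z, n_total = sequential_z_monitor(0.6, 0.4)
print(f"final z = {z:.2f} after {n_total} patients")
```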

16.
This paper considers the problem of estimation when one of a number of populations, assumed normal with known common variance, is selected on the basis of its having the largest observed mean. Conditional on selection of the population, the observed mean is a biased estimate of the true mean. This problem arises in the analysis of clinical trials in which selection is made between a number of experimental treatments that are compared with each other, either with or without an additional control treatment. Attempts to obtain approximately unbiased estimates in this setting have been proposed by Shen [2001. An improved method of evaluating drug effect in a multiple dose clinical trial. Statist. Medicine 20, 1913–1929] and Stallard and Todd [2005. Point estimates and confidence regions for sequential trials involving selection. J. Statist. Plann. Inference 135, 402–419]. This paper explores the problem in the simple setting in which two experimental treatments are compared in a single analysis. It is shown that in this case the estimate of Stallard and Todd is the maximum-likelihood estimate (m.l.e.), which is compared with the estimate proposed by Shen. In particular, it is shown that the m.l.e. has infinite expectation whatever the true value of the mean being estimated. We show that there is no conditionally unbiased estimator, and we propose a new family of approximately conditionally unbiased estimators, comparing these with the estimators suggested by Shen.
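The conditional bias at the heart of the problem is easy to exhibit by simulation: with two arms of equal true mean, the observed mean of the selected arm overestimates the truth by sigma/sqrt(pi*n), the expected value of the larger of two independent normal sample means:

```python
import numpy as np

rng = np.random.default_rng(7)

mu, sigma, n, sims = 0.0, 1.0, 25, 100_000
# Observed means of two arms with equal true mean mu
xbar = rng.normal(mu, sigma / np.sqrt(n), size=(sims, 2))
selected = xbar.max(axis=1)              # mean of the selected (larger) arm

print("true mean of selected arm:     ", mu)
print("average observed selected mean:", round(float(selected.mean()), 4))
# Under equal means, E[max] - mu = sigma / sqrt(pi * n)
print("theoretical selection bias:    ", round(sigma / np.sqrt(np.pi * n), 4))
```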

17.
Confidence intervals for location parameters are expanded (in either direction) to certain "crucial" points, and the resulting increase in the confidence coefficient is investigated. Particular crucial points are chosen to illuminate some hypothesis testing problems. Special results are derived for the normal distribution with estimated variance and, in particular, for the problem of classifying treatments as better or worse than a control. For this problem the usual two-sided Dunnett procedure is seen to be inefficient, and suggestions are made for the use of already published tables. Mention is made of the use of expanded confidence intervals for all pairwise comparisons of treatments using an "honest ordering difference" rather than Tukey's "honest significant difference".

18.
Frequently in clinical and epidemiologic studies, the event of interest is recurrent, i.e., it can occur more than once per subject. When the events are not all of the same type, an analysis that accounts for the different categories of events will often be more informative. Often, however, although event times are always known, the information through which events are categorized may be missing. Complete-case methods (whose application may require, for example, that events be censored when their category cannot be determined) are valid only when event categories are missing completely at random, a rather restrictive assumption. The authors propose two multiple imputation methods for analyzing multiple-category recurrent event data under the proportional means/rates model, distinguished by the use of a proper or an improper imputation technique. Both methods lead to consistent estimation of regression parameters even when the missingness of event categories depends on covariates. The authors derive the asymptotic properties of the estimators, examine their finite-sample behaviour through simulation, and illustrate the approach using data from an international study on dialysis.
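A deliberately simplified sketch of the multiple-imputation idea, assuming MCAR missingness and a marginal imputation model, unlike the paper's covariate-dependent schemes under the proportional means/rates model:

```python
import numpy as np

rng = np.random.default_rng(8)

# Recurrent events fall into category 1 or 2; 25% of the category
# labels are unobserved (MCAR here, for simplicity).
cats = rng.choice([1.0, 2.0], size=200, p=[0.7, 0.3])
miss = rng.random(200) < 0.25
cats_obs = cats.copy()
cats_obs[miss] = np.nan

observed = cats_obs[~np.isnan(cats_obs)]
p1 = np.mean(observed == 1)              # observed category-1 proportion

M, est = 20, []
for _ in range(M):                       # M completed data sets
    filled = cats_obs.copy()
    n_miss = int(np.isnan(filled).sum())
    filled[np.isnan(filled)] = rng.choice([1.0, 2.0], size=n_miss,
                                          p=[p1, 1 - p1])
    est.append(np.mean(filled == 1))     # per-imputation estimate
print("pooled MI estimate of category-1 fraction:",
      round(float(np.mean(est)), 3))
print("true fraction:", round(float(np.mean(cats == 1)), 3))
```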

19.
A general approach for comparing designs of paired comparison experiments on the basis of the asymptotic relative efficiencies, in the Bahadur sense, of their respective likelihood ratio tests is discussed and extended to factorials. Explicit results are obtained for comparing five designs of 2^q factorial paired comparison experiments. These results indicate that some designs requiring comparison of fewer distinct pairs of treatments than the completely balanced design are generally more efficient for detecting main effects and/or certain interactions. The developments of this paper generalize the work of Littell and Boyett (1977) comparing two designs of R × C factorial paired comparison experiments.

20.
Seamless phase II/III clinical trials are conducted in two stages with treatment selection at the first stage. In the first stage, patients are randomized to a control or to one of k > 1 experimental treatments. At the end of this stage, interim data are analysed and a decision is made concerning which experimental treatment should continue to the second stage. If the primary endpoint is observable only after some period of follow-up, data may be available at the interim analysis on some early outcome for a larger number of patients than have yielded the primary endpoint, and these early endpoint data can be used for treatment selection. For two previously proposed approaches, the power has been shown to be greater for one or the other method depending on the true treatment effects and correlations. We propose a new approach that builds on these approaches: it uses the data available at the interim analysis to estimate the relevant parameters and then, on the basis of these estimates, chooses the treatment selection method with the highest probability of correctly selecting the most effective treatment. The new method is shown to perform well compared with the two previously described methods across a wide range of true parameter values; in most cases its performance is similar to, and in some cases better than, either of them.
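A simulation sketch of stage-1 selection on a correlated early endpoint, under an invented normal model in which the early treatment effect is rho times the final one (the paper's adaptive choice among competing selection rules is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(9)

def select_on_early_endpoint(deltas, rho=0.6, n_stage1=50, sims=5000):
    """Stage-1 selection: pick the arm with the best observed mean on an
    early endpoint whose true effect is rho * (final-endpoint effect)."""
    best = int(np.argmax(deltas))        # truly best arm (final endpoint)
    correct = 0
    for _ in range(sims):
        early_means = np.array([
            rng.normal(rho * d, 1 / np.sqrt(n_stage1)) for d in deltas])
        correct += int(np.argmax(early_means) == best)
    return correct / sims

print("P(correct selection):", select_on_early_endpoint([0.0, 0.2, 0.4]))
```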
