Related Articles
20 similar records retrieved.
1.
We show that assumptions that are sufficient for estimating an average treatment effect in randomized trials with non-compliance restrict the subgroup means for always takers, compliers, defiers and never takers to a two-dimensional linear subspace of a four-dimensional space. Implications and special cases are exemplified.

2.
In this paper, we argue that replacing the expectation of the loss in statistical decision theory with the median of the loss leads to a viable and useful alternative to conventional risk minimization, particularly because it can be used with heavy-tailed distributions. We investigate three possible definitions of such medloss estimators and derive examples of them in several standard settings. We argue that the medloss definition based on the posterior distribution is preferable to the other two definitions, which do not permit optimization over large classes of estimators. Median-loss-minimizing estimates often yield improved performance, resist outliers as well as the usual robust estimates do, and are relatively insensitive to the specific loss used to form them. In simulations with the posterior medloss formulation, we show how the estimates can be obtained numerically and that they can have better robustness properties than estimates derived from risk minimization.
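As a rough, non-authoritative illustration of the posterior medloss idea described above, the Python sketch below picks the action minimizing the median of the loss over posterior draws; the heavy-tailed posterior, the squared-error loss, and the grid search are assumptions made for the example, not the paper's exact formulation.

```python
# A minimal numerical sketch of a posterior "medloss" estimate: instead of
# minimizing the posterior expected loss, pick the action minimizing the
# posterior *median* of the loss. The heavy-tailed posterior and the
# squared-error loss are purely illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Assume posterior draws of a scalar parameter are already available
# (here simulated from a heavy-tailed distribution for illustration).
theta_draws = rng.standard_t(df=2, size=5000) + 1.0

def posterior_median_loss(a, draws):
    """Median over posterior draws of the squared-error loss of action a."""
    return np.median((a - draws) ** 2)

# Minimize over a grid of candidate actions.
grid = np.linspace(np.percentile(theta_draws, 1), np.percentile(theta_draws, 99), 1001)
med_losses = np.array([posterior_median_loss(a, theta_draws) for a in grid])
medloss_estimate = grid[np.argmin(med_losses)]

print("medloss estimate:", round(medloss_estimate, 3))
print("posterior mean (minimizes expected squared-error loss):",
      round(theta_draws.mean(), 3))
```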

3.
In this study, we propose a median control chart. To determine the control limits, we consider using an estimate of the variance of the sample median and also applying bootstrap methods. We then illustrate the proposed median control chart with an example and compare the bootstrap methods in a simulation study. Finally, we discuss some distinctive features of the median control chart as concluding remarks.
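A minimal sketch of one way to set bootstrap control limits for a median chart, in the spirit of the abstract; the subgroup size, number of replications, and percentile-based limits are illustrative assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch of bootstrap control limits for a median chart
# (subgroup size, limit percentiles, and reference data are assumptions).
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(loc=10.0, scale=2.0, size=200)  # in-control Phase I data
n = 5            # subgroup size
B = 10000        # bootstrap replications

# Bootstrap the sampling distribution of the subgroup median.
boot_medians = np.array([
    np.median(rng.choice(reference, size=n, replace=True)) for _ in range(B)
])

# Control limits as percentiles matching a ~0.27% false-alarm rate.
lcl, ucl = np.percentile(boot_medians, [0.135, 99.865])
center = np.median(reference)
print(f"LCL={lcl:.2f}, CL={center:.2f}, UCL={ucl:.2f}")

# A new subgroup signals if its median falls outside (LCL, UCL).
new_subgroup = rng.normal(loc=13.0, scale=2.0, size=n)
print("signal:", not (lcl < np.median(new_subgroup) < ucl))
```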

4.
With the advances in human genomic/genetic studies, the clinical trial community has gradually recognized that phenotypically homogeneous patients may be heterogeneous at the genomic level. Genomic technology offers a possible avenue for developing a genomic (composite) biomarker to predict a genomically responsive patient subset that may have a (much) higher likelihood of benefiting from a treatment. The randomized controlled trial is the mainstay for providing scientifically convincing evidence of a purported effect of a new treatment. In conventional clinical trials, the primary clinical hypothesis pertains to the therapeutic effect, defined by the primary efficacy endpoint, in all patients eligible for the study. The one-size-fits-all aspect of the conventional design has been challenged, particularly when diseases may be heterogeneous due to observable clinical characteristics and/or unobservable underlying genomic characteristics. Extending the conventional single-population design objective to one that encompasses two possible patient populations allows a more informative evaluation of patients with different degrees of responsiveness to medication. Building on conventional clinical trials, an additional genomic objective can provide an appealing conceptual framework, from the patient's perspective, for addressing personalized medicine in well-controlled clinical trials. Many perceived benefits of personalized medicine rest on the notion of being genomically proactive in identifying disease and preventing disease or recurrence. In this paper, we show that an adaptive design approach can be constructed to study a clinical hypothesis of overall treatment effect and a hypothesis of treatment effect in a genomic subset more efficiently than the conventional non-adaptive approach.

5.
In this article, a Tukey-type method is proposed that will allow simultaneous pairwise comparisons among all pairs of samples associated with Mood's procedure. An example is also provided for illustrative purposes.

6.
Covariate adjustment for the estimation of treatment effects in randomized controlled trials (RCTs) is a simple approach with a long history, so its pros and cons have been well investigated and published in the literature. It is worthwhile to revisit this topic because there has recently been significant investigation and development regarding model assumptions and robustness to model mis-specification, in particular with respect to the Neyman-Rubin model and the average treatment effect estimand. This paper discusses key results of this investigation and development and their practical implications for pharmaceutical statistics. Accordingly, we recommend that appropriate covariate adjustment be used more widely in RCTs for both hypothesis testing and estimation.
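As a hedged illustration of covariate adjustment for the average treatment effect in an RCT, the sketch below uses regression adjustment with arm-specific linear outcome models (g-computation); the simulated data and model are assumptions for the example, not the specific methods reviewed in the paper.

```python
# A hedged sketch of covariate adjustment for the average treatment effect in an
# RCT via regression adjustment (g-computation with arm-specific linear models).
# Data-generating values are purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 500
x = rng.normal(size=(n, 2))                    # baseline covariates
z = rng.integers(0, 2, size=n)                 # randomized treatment indicator
y = 1.0 + 0.5 * z + x @ np.array([1.0, -0.5]) + rng.normal(size=n)

def fit_linear(X, y):
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

def predict(beta, X):
    return np.column_stack([np.ones(len(X)), X]) @ beta

# Fit outcome models separately in each arm, then average predictions over
# the full covariate distribution (the estimand is the average treatment effect).
beta1 = fit_linear(x[z == 1], y[z == 1])
beta0 = fit_linear(x[z == 0], y[z == 0])
ate_adj = np.mean(predict(beta1, x) - predict(beta0, x))
ate_unadj = y[z == 1].mean() - y[z == 0].mean()
print(f"unadjusted: {ate_unadj:.3f}, covariate-adjusted: {ate_adj:.3f}")
```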

7.
In this article, we suggest some classes of estimators for estimating the finite population median using information on an auxiliary variable. To study their properties under a large-sample approximation, a generalized class of estimators is introduced and its properties are derived. It is shown that the suggested classes of estimators are more efficient than other existing estimators. The results are illustrated through an empirical study.
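One simple member of the kind of estimator classes discussed above is a ratio-type median estimator that rescales the sample median of the study variable by the known population median of the auxiliary variable; the sketch below is illustrative only, and the paper's classes are more general.

```python
# Illustrative ratio-type estimator of the finite population median of Y using a
# known population median of an auxiliary variable X (one simple member of the
# kind of classes described above; the paper's classes are more general).
import numpy as np

rng = np.random.default_rng(3)

# A toy finite population where Y and X are positively related.
N = 10_000
X = rng.gamma(shape=3.0, scale=2.0, size=N)
Y = 2.0 * X + rng.normal(scale=2.0, size=N)
M_X = np.median(X)              # assumed known for the whole population

# Simple random sample without replacement.
n = 200
idx = rng.choice(N, size=n, replace=False)
m_y, m_x = np.median(Y[idx]), np.median(X[idx])

median_ratio = m_y * (M_X / m_x)   # ratio-type median estimator
print(f"sample median: {m_y:.2f}, ratio estimator: {median_ratio:.2f}, "
      f"true population median: {np.median(Y):.2f}")
```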

8.
We consider the weighted median problem for a given set of data and analyze its main properties. As an illustration, an efficient method for finding a weighted least absolute deviations (LAD) line is given, which serves as the basis for solving various linear and nonlinear LAD problems occurring in applications. Our method is illustrated by an example of hourly natural gas consumption forecasting.
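A minimal weighted-median routine of the sort used as a building block in weighted LAD fitting is sketched below; the tie-handling convention is one common choice and is an assumption, not necessarily the definition analyzed in the paper.

```python
# A minimal weighted-median routine of the kind used as a building block for
# weighted LAD fitting (tie-handling conventions vary; this is one common choice).
import numpy as np

def weighted_median(values, weights):
    """Smallest value v such that the total weight of points <= v is at least
    half of the total weight."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cum = np.cumsum(w)
    return v[np.searchsorted(cum, 0.5 * w.sum())]

print(weighted_median([1.0, 2.0, 3.0, 10.0], [1.0, 1.0, 1.0, 5.0]))  # -> 10.0
```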

9.
The average effect of the treatment on the treated is a quantity of interest in observational studies in which no definite parameter can be used to quantify the treatment effect, such as those where only a random subset of the data obtained by stratification can be used for analysis. Nonparametric confidence intervals for this quantity appear to be known only in the case where the responses to the treatment are binary and the data fall into a single stratum. We propose nonparametric confidence intervals for the average effect of the treatment on the treated in studies involving one or more strata and general numerical responses.

10.
The standard log-rank test has been extended by adopting various weight functions. Cancer vaccine or immunotherapy trials have shown a delayed onset of effect for the experimental therapy. This is manifested as a delayed separation of the survival curves. This work proposes new weighted log-rank tests to account for such delay. The weight function is motivated by the time-varying hazard ratio between the experimental and the control therapies. We implement a numerical evaluation of the Schoenfeld approximation (NESA) for the mean of the test statistic. The NESA enables us to assess the power and to calculate the sample size for detecting such delayed treatment effect and also for a more general specification of the non-proportional hazards in a trial. We further show a connection between our proposed test and the weighted Cox regression. Then the average hazard ratio using the same weight is obtained as an estimand of the treatment effect. Extensive simulation studies are conducted to compare the performance of the proposed tests with the standard log-rank test and to assess their robustness to model mis-specifications. Our tests outperform the G^{ρ,γ} class in general and have performance close to the optimal test. We demonstrate our methods on two cancer immunotherapy trials.
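The sketch below illustrates a two-sample weighted log-rank statistic with a delayed-effect weight (zero before an assumed lag and one afterwards); the simulated data, the step weight, and the lag value are assumptions for illustration, and the paper's weight, motivated by the time-varying hazard ratio, can be substituted for the `weight` argument.

```python
# A hedged sketch of a weighted log-rank statistic with a delayed-effect weight
# (weight 0 before an assumed lag and 1 afterwards; the smooth weight motivated
# by a time-varying hazard ratio could be plugged into `weight` instead).
import numpy as np

def weighted_logrank(time, event, group, weight):
    """Two-sample weighted log-rank Z statistic.
    time, event (1=event, 0=censored), group (0/1) are 1-D arrays;
    weight(t) returns the weight at event time t."""
    time, event, group = map(np.asarray, (time, event, group))
    u, v = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = weight(t)
        u += w * (d1 - d * n1 / n)
        if n > 1:
            v += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return u / np.sqrt(v)

# Toy data with an assumed delayed-effect onset at t = 3 in the treated arm.
rng = np.random.default_rng(4)
n = 200
group = rng.integers(0, 2, size=n)
t_event = np.where(group == 1, 3 + rng.exponential(8, n), rng.exponential(6, n))
t_cens = rng.exponential(15, n)
time = np.minimum(t_event, t_cens)
event = (t_event <= t_cens).astype(int)
z = weighted_logrank(time, event, group, weight=lambda t: float(t >= 3.0))
print("weighted log-rank Z:", round(z, 3))
```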

11.
This article considers likelihood methods for estimating the causal effect of treatment assignment in a two-armed randomized trial, assuming all-or-none treatment noncompliance and allowing for subsequent nonresponse. We first derive the observed-data likelihood function as a closed-form expression in the parameters, with both the response and the compliance state treated as variables with missing values. We then describe an iterative procedure that maximizes the observed-data likelihood function directly to compute a maximum likelihood estimator (MLE) of the causal effect of treatment assignment; closed-form expressions at each iterative step are provided. Finally, we compare the MLE with an alternative estimator in which the probability distribution of the compliance state is estimated independently of the response and its missingness mechanism. Our work indicates that direct maximum likelihood inference is straightforward for this problem. Extensive simulation studies examine the finite-sample performance of the proposed methods.

12.
In recent years, zero-inflated count data models, such as the zero-inflated Poisson (ZIP) model, have become widely used because count data with extra zeros are common in many practical problems. To model correlated count data that are either clustered or repeated, and to assess the effects of continuous covariates or of time scales in a flexible way, a class of semiparametric mixed-effects models for zero-inflated count data is considered. In this article, we propose a fully Bayesian inference for such models based on a data augmentation scheme that reflects both the random effects of covariates and the zero-inflated mixture structure. A computationally efficient MCMC method combining the Gibbs sampler and the M-H algorithm is implemented to obtain estimates of the model parameters. Finally, a simulation study and a real example are used to illustrate the proposed methodologies.
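As a small, hedged illustration of the zero-inflated Poisson building block referred to above, the sketch below simulates iid ZIP counts and evaluates the ZIP log-likelihood; the semiparametric mixed-effects structure and the MCMC machinery of the paper are not reproduced, and the parameter values are assumptions.

```python
# A small illustration of the zero-inflated Poisson (ZIP) building block used in
# such models: simulate ZIP counts and evaluate the ZIP log-likelihood (the
# mixed-effects and MCMC machinery of the paper is not shown here).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(6)
pi, lam, n = 0.3, 2.5, 1000          # illustrative mixing weight and Poisson mean

# Simulate: with probability pi the count is a "structural" zero, else Poisson(lam).
structural_zero = rng.random(n) < pi
y = np.where(structural_zero, 0, rng.poisson(lam, size=n))

def zip_loglik(y, pi, lam):
    """Log-likelihood of iid ZIP(pi, lam) data."""
    p0 = pi + (1 - pi) * np.exp(-lam)                   # P(Y = 0)
    ll_zero = np.log(p0)
    ll_pos = np.log(1 - pi) + poisson.logpmf(y, lam)
    return np.sum(np.where(y == 0, ll_zero, ll_pos))

print("observed zero fraction:", np.mean(y == 0))
print("expected zero fraction:", pi + (1 - pi) * np.exp(-lam))
print("log-likelihood at true values:", round(zip_loglik(y, pi, lam), 1))
```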

13.
Summary. For a binary treatment ν = 0, 1 and the corresponding potential responses Y_0 for the control group (ν = 0) and Y_1 for the treatment group (ν = 1), one definition of no treatment effect is that Y_0 and Y_1 follow the same distribution given a covariate vector X. Koul and Schick have provided a non-parametric test for no distributional effect when the realized response (1 − ν)Y_0 + νY_1 is fully observed and the distribution of X is the same across the two groups. This test is thus not applicable to censored responses, nor to non-experimental (i.e. observational) studies that entail different distributions of X across the two groups. We propose 'X-matched' non-parametric tests generalizing the test of Koul and Schick following an idea of Gehan. Our tests are applicable to non-experimental data with randomly censored responses. In addition to these motivations, the tests have several advantages. First, they have the intuitive appeal of comparing all available pairs across the treatment and control groups, instead of selecting a number of matched controls (or treated) as in the usual pair or multiple matching. Second, whereas most matching estimators or tests have a non-overlapping support (of X) problem across the two groups, our tests have a built-in protection against the problem. Third, Gehan's idea allows the tests to make good use of censored observations. A simulation study is conducted, and an empirical illustration of a job training effect on the duration of unemployment is provided.
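The sketch below is a hedged illustration of a Gehan-type pairwise comparison for right-censored responses, with pairs down-weighted by their distance in X; the Gaussian kernel weighting, the toy data, and the naive permutation calibration are assumptions made for illustration and should not be read as the authors' test.

```python
# A hedged sketch of a Gehan-type pairwise statistic for right-censored responses,
# with pairs down-weighted by distance in X (the kernel weighting and the
# permutation calibration are illustrative simplifications, not the paper's test).
import numpy as np

def gehan_score(t_i, d_i, t_j, d_j):
    """+1 if observation i is known to exceed j, -1 if known to be smaller, else 0."""
    if t_i > t_j and d_j == 1:
        return 1
    if t_i < t_j and d_i == 1:
        return -1
    return 0

def xmatched_statistic(time, delta, group, x, bandwidth=1.0):
    treated = np.flatnonzero(group == 1)
    control = np.flatnonzero(group == 0)
    stat = 0.0
    for i in treated:
        for j in control:
            w = np.exp(-0.5 * ((x[i] - x[j]) / bandwidth) ** 2)  # Gaussian kernel in X
            stat += w * gehan_score(time[i], delta[i], time[j], delta[j])
    return stat

# Toy data and a naive permutation p-value (group labels permuted).
rng = np.random.default_rng(5)
n = 60
x = rng.normal(size=n)
group = rng.integers(0, 2, size=n)
t_true = rng.exponential(np.exp(0.3 * group + 0.2 * x))
t_cens = rng.exponential(2.0, size=n)
time, delta = np.minimum(t_true, t_cens), (t_true <= t_cens).astype(int)

obs = xmatched_statistic(time, delta, group, x)
perm = [xmatched_statistic(time, delta, rng.permutation(group), x) for _ in range(200)]
p_value = np.mean(np.abs(perm) >= abs(obs))
print(f"statistic: {obs:.2f}, permutation p-value: {p_value:.3f}")
```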

14.
Summary.  The US case on tying Microsoft Internet Explorer to Windows has received much attention. In Europe, a similar case of tying the Microsoft media player to Windows appeared. Recently in Korea, another similar case of tying a Microsoft messenger to Windows occurred. In the messenger tying case (as well as in the other tying cases), Microsoft's main defence seems to be threefold: tying enhances efficiency, the Microsoft product is better or better marketed and tying is inconsequential because the user can easily download free competing products. The paper empirically addresses the third point. Korean data, used as evidence in the trial of the case, reveal that tying the Microsoft messenger to Windows increased the probability of choosing the Microsoft messenger as the main messenger by 22% for Windows Millennium and 35% for Windows XP. There is also evidence that tying shortened the duration until the Microsoft messenger is adopted by about 2–4 months, compared with the duration until the adoption of a competing messenger. Hence tying provided Microsoft with an almost instant non-trivial advantage in the messenger market 'race'—the advantage derived from the dominant position in the operating system market.

15.
Based on the semiparametric median regression analysis for right-censored data developed by Ying et al. (1995), an empirical likelihood based inferential procedure for the regression coefficients is proposed. The limiting distribution of the proposed log-empirical likelihood ratio test statistic is chi-squared, in accordance with the standard asymptotic results of the empirical likelihood method. Inference about subsets of the entire regression coefficient vector is discussed. The proposed method is illustrated by some simulation studies.

16.
The indirect mechanism of action of immunotherapy causes a delayed treatment effect, producing delayed separation of the survival curves between treatment groups and violating the proportional hazards assumption. Using the log-rank test in an immunotherapy trial design could therefore result in a severe loss of efficiency. Although few statistical methods are available for immunotherapy trial designs that incorporate a delayed treatment effect, Ye and Yu recently proposed the use of a maximin efficiency robust test (MERT). The MERT is a weighted log-rank test that puts less weight on early events and full weight after the delayed period. However, the weight function of the MERT involves an unknown function that has to be estimated from historical data. Here, for simplicity, we propose the use of an approximated maximin test, the V0 test, which is the sum of the log-rank test for the full data set and the log-rank test for the data beyond the lag time point. The V0 test fully uses the trial data and is more efficient than the log-rank test when a lag exists, with relatively little efficiency loss when no lag exists. The sample size formula for the V0 test is derived. Simulations are conducted to compare the performance of the V0 test with existing tests. A real trial is used to illustrate cancer immunotherapy trial design with a delayed treatment effect.
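The sketch below illustrates a V0-type statistic: because the full-data and post-lag log-rank scores add over event times, their sum corresponds (up to the handling of ties at the lag point) to a weighted log-rank score with weight 1 before the lag and 2 afterwards. The lag value, the simulated data, and the variance-based standardization are assumptions for illustration, not the paper's exact construction or sample size machinery.

```python
# A hedged sketch of a V0-type statistic: sum of the full-data log-rank score and
# the post-lag log-rank score, computed here as a weighted log-rank score with
# weight 1 before an assumed lag t0 and 2 afterwards.
import numpy as np

def logrank_score(time, event, group, weight):
    """Return (score U, variance V) of a two-sample weighted log-rank statistic."""
    U, V = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n_t, n1_t = at_risk.sum(), (at_risk & (group == 1)).sum()
        d_t = ((time == t) & (event == 1)).sum()
        d1_t = ((time == t) & (event == 1) & (group == 1)).sum()
        w = weight(t)
        U += w * (d1_t - d_t * n1_t / n_t)
        if n_t > 1:
            V += w**2 * d_t * (n1_t / n_t) * (1 - n1_t / n_t) * (n_t - d_t) / (n_t - 1)
    return U, V

# Toy data with an assumed lag (delayed-effect onset) at t0 = 3.
rng = np.random.default_rng(7)
n, t0 = 300, 3.0
group = rng.integers(0, 2, size=n)
t_event = np.where(group == 1, t0 + rng.exponential(9, n), rng.exponential(7, n))
t_cens = rng.exponential(20, n)
time, event = np.minimum(t_event, t_cens), (t_event <= t_cens).astype(int)

# Weight 1 before the lag and 2 after it reproduces the sum of the two scores.
U, V = logrank_score(time, event, group, weight=lambda t: 1.0 + float(t >= t0))
print("V0-type Z statistic:", round(U / np.sqrt(V), 3))
```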

17.
In statistical practice, systematic sampling (SYS) is used in many modifications because of its simple handling. In addition, SYS may provide efficiency gains if it is well adjusted to the structure of the population under study. However, if SYS is based on an inappropriate picture of the population, a large decrease in efficiency, i.e. a large increase in variance, may result from changing from simple random sampling to SYS. In two-stage designs, SYS so far seems to be used mostly for subsampling within the primary units. As an alternative to this practice, we propose to randomize the order of the primary units, then to select a number of primary units systematically and, thereafter, to draw secondary units by simple random sampling without replacement within the selected primary units. This procedure is more efficient than simple random sampling with replacement from the whole population of secondary units: the variance of an adequate estimator for a total is never increased by changing from simple random sampling to randomized SYS, whatever values a characteristic associates with the secondary units, while there are values for which the variance decreases under this change. This result should hold generally, even though our proof is, so far, not complete for general sample sizes.
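A hedged sketch of the proposed two-stage selection scheme follows: randomize the order of the primary units, select primary units systematically, subsample secondary units by simple random sampling without replacement, and expand to estimate the total. The population, sample sizes, and expansion estimator below are illustrative assumptions.

```python
# A sketch of the two-stage procedure described above: randomize the order of the
# primary units (PSUs), take a systematic sample of PSUs, then draw secondary
# units (SSUs) by simple random sampling without replacement within each selected
# PSU, and expand to estimate the population total. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(8)

# Toy population: 20 PSUs, each with its own list of SSU values.
psus = [rng.normal(loc=mu, scale=1.0, size=rng.integers(30, 60))
        for mu in rng.uniform(5, 15, size=20)]
true_total = sum(p.sum() for p in psus)

n_psu, m_ssu = 5, 10                       # PSUs to select, SSUs per selected PSU
N_psu = len(psus)
k = N_psu // n_psu                         # systematic sampling interval

order = rng.permutation(N_psu)             # randomized order of PSUs
start = rng.integers(k)                    # random start
selected = order[start::k][:n_psu]         # systematic selection in random order

estimate = 0.0
for i in selected:
    unit = psus[i]
    ssu = rng.choice(unit, size=min(m_ssu, len(unit)), replace=False)  # SRSWOR
    estimate += (N_psu / n_psu) * (len(unit) / len(ssu)) * ssu.sum()

print(f"estimated total: {estimate:.0f}, true total: {true_total:.0f}")
```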

18.
This article considers some classes of estimators of the population median of the study variable using information on an auxiliary variable, and studies their properties under a large-sample approximation. The asymptotically optimum estimator (AOE) in each class is investigated, along with approximate mean square error formulae. It is shown that the proposed classes of estimators are better than those considered by Gross (1980), Kuk and Mak (1989), Singh et al. (2003a), and Al and Cingi (2009). An empirical study is carried out to judge the merits of the suggested classes of estimators over other existing estimators.

19.
Recently, a least absolute deviations (LAD) estimator for median regression models with doubly censored data was proposed and its asymptotic normality established. However, inference on the regression parameter vector is difficult because the asymptotic covariance matrix involves conditional densities of the error terms and is therefore hard to estimate reliably. In this article, three methods, based on the bootstrap, random weighting, and empirical likelihood, respectively, none of which requires density estimation, are proposed for making inference in doubly censored median regression models. Simulations are conducted to assess the performance of the proposed methods.

20.
The randomized cluster design is typical of studies in which the unit of randomization is a cluster of individuals rather than the individual. Evaluating various intervention strategies across medical care providers, at either an institutional level or a physician group practice level, fits the randomized cluster model. Clearly, the analytical approach to such studies must take the unit of randomization and the accompanying intraclass correlation into consideration. We review alternatives to the typical Pearson chi-square analysis and illustrate these alternatives. We have written and tested a Fortran program that produces the statistics outlined in this paper. The program, in executable format, is available from the author on request.
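As a hedged illustration of one common alternative to the naive Pearson chi-square in cluster-randomized comparisons, the sketch below deflates the chi-square by a design effect 1 + (m_bar - 1) * ICC (a Donner-type adjustment); the table, average cluster size, and ICC are assumed values, and this is not necessarily among the exact statistics produced by the authors' Fortran program.

```python
# A hedged sketch of a design-effect-adjusted chi-square for a cluster-randomized
# comparison of two proportions (a Donner-type adjustment). The 2x2 table, the
# average cluster size, and the intraclass correlation are illustrative assumptions.
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Aggregated 2x2 table: rows = arms, columns = (event, no event).
table = np.array([[120, 380],    # intervention arm
                  [ 90, 410]])   # control arm

m_bar = 25      # average cluster size (assumed)
icc = 0.02      # intraclass correlation (assumed)
deff = 1 + (m_bar - 1) * icc    # design effect

chi2_naive = chi2_contingency(table, correction=False)[0]
chi2_adj = chi2_naive / deff
print(f"naive chi-square: {chi2_naive:.2f} (p={chi2.sf(chi2_naive, 1):.4f})")
print(f"design effect: {deff:.2f}")
print(f"adjusted chi-square: {chi2_adj:.2f} (p={chi2.sf(chi2_adj, 1):.4f})")
```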

