Similar Articles
20 similar articles found (search time: 15 ms).
1.
Not having a variance estimator is a serious practical weakness of a sampling design. This paper provides unbiased variance estimators for several sampling designs based on inverse sampling, both with and without an adaptive component. It proposes a new design, called the general inverse sampling design, that avoids sampling an infeasibly large number of units. The paper provides estimators for this design as well as for its adaptive modification. A simple artificial example is used to demonstrate the computations. The adaptive and non-adaptive designs are compared using simulations based on real data sets. The results indicate that, for appropriate populations, the adaptive version can achieve a substantial variance reduction compared with the non-adaptive version. Moreover, adaptive general inverse sampling with a limit on the initial sample size achieves a greater variance reduction than the unrestricted version.
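Below is a minimal simulation sketch of the general inverse sampling idea described above: units are drawn sequentially until a fixed number of rare units is found or a cap on the total sample size is reached, which is the mechanism that avoids an infeasibly large sample. The population, the target count k, and the cap n_max are illustrative assumptions, not the paper's worked example.

```python
import numpy as np

rng = np.random.default_rng(42)

def general_inverse_sample(rare, k=5, n_max=100):
    """Draw units without replacement until k rare units are seen
    or n_max draws have been made, whichever comes first."""
    order = rng.permutation(len(rare))       # random draw order
    sample, n_rare = [], 0
    for idx in order:
        sample.append(idx)
        n_rare += rare[idx]
        if n_rare >= k or len(sample) >= n_max:
            break
    return np.array(sample)

# Toy population in which about 2% of units belong to the rare class.
N = 5000
rare = rng.random(N) < 0.02

s = general_inverse_sample(rare)
print(f"final sample size: {len(s)}, rare units found: {int(rare[s].sum())}")
```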

2.
In phase I trials, the main goal is to identify a maximum tolerated dose under an assumption of monotonicity in dose–response relationships. Such monotonicity, however, no longer holds for biologic agents, whose mode of action differs from that of cytotoxic agents and can produce unimodal or flat dose–efficacy curves. Biologic agents therefore require an optimal dose that provides a sufficient efficacy rate under an acceptable toxicity rate, rather than a maximum tolerated dose. Many trials incorporate both toxicity and efficacy data, and drugs with a variety of modes of action are increasingly being developed; thus, optimal dose estimation designs have been receiving increased attention. Although numerous authors have introduced parametric model-based designs, it is not always appropriate to impose strong assumptions on dose–response relationships. We propose a new design based on a Bayesian optimization framework for identifying optimal doses of biologic agents in phase I/II trials. Our proposed design models dose–response relationships via nonparametric models utilizing a Gaussian process prior, and the uncertainty of estimates is accounted for in the dose selection process. We compared the operating characteristics of our proposed design against those of three other designs through simulation studies: an expansion of the Bayesian optimal interval design, the parametric model-based EffTox design, and the isotonic design. In simulations, our proposed design performed well and provided results that were more stable than those of the other designs, in terms of the accuracy of optimal dose estimation and the percentage of correct recommendations.
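The following sketch illustrates the core mechanism: a Gaussian process fit to observed efficacy rates yields a posterior mean and standard deviation at candidate doses, and an optimistic acquisition rule lets the posterior uncertainty influence dose selection. The kernel, noise level, data, and upper-credible-bound rule are assumptions for illustration, not the authors' exact specification.

```python
import numpy as np

def rbf(a, b, length=1.0, var=0.25):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

doses   = np.array([1.0, 2.0, 3.0])        # doses tried so far
eff_obs = np.array([0.10, 0.35, 0.40])     # observed efficacy rates
grid    = np.linspace(1.0, 5.0, 9)         # candidate dose grid
noise   = 0.05 ** 2                        # assumed observation noise

K   = rbf(doses, doses) + noise * np.eye(len(doses))
Ks  = rbf(grid, doses)
Kss = rbf(grid, grid)
L   = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, eff_obs))

mu = Ks @ alpha                                     # posterior mean
v  = np.linalg.solve(L, Ks.T)
sd = np.sqrt(np.diag(Kss) - np.sum(v * v, axis=0))  # posterior sd

ucb = mu + 1.0 * sd      # optimism bonus favours unexplored doses
print("next dose to evaluate:", grid[np.argmax(ucb)])
```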

3.
A method of maximum likelihood estimation of gross flows from overlapping stratified sample data is developed. The approach taken is model-based and the EM algorithm is used to solve the estimation problem. Inference is thus based on information from the total sample at each time period. This can be contrasted with the conventional approach to gross flows estimation, which uses information from the overlapping sub-sample only. An application to estimation of flows of Australian cropping and livestock industry farms into and out of an “at risk” situation over the period 1979–84 is presented, as well as a discussion of extensions to more complex sampling situations.
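As a concrete illustration of the model-based idea, the sketch below runs EM on a 2x2 gross-flows table in which one subsample is observed at both time periods while the remaining units are observed at only one period, so the whole sample contributes to estimation. All counts are invented for illustration; the paper's farm application is far richer.

```python
import numpy as np

full  = np.array([[30., 10.],    # units observed at both times:
                  [ 5., 55.]])   # rows = state at t1, cols = state at t2
only1 = np.array([20., 40.])     # observed at t1 only, by t1 state
only2 = np.array([15., 45.])     # observed at t2 only, by t2 state

p = np.full((2, 2), 0.25)        # initial joint flow probabilities
for _ in range(200):
    # E-step: allocate partially observed units across cells in
    # proportion to the current conditional flow probabilities.
    exp1 = only1[:, None] * p / p.sum(axis=1, keepdims=True)
    exp2 = only2[None, :] * p / p.sum(axis=0, keepdims=True)
    counts = full + exp1 + exp2
    # M-step: re-normalise the completed counts into probabilities.
    p_new = counts / counts.sum()
    if np.abs(p_new - p).max() < 1e-10:
        break
    p = p_new

print(np.round(p, 3))            # estimated gross-flow probabilities
```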

4.
Few topics have stirred as much discussion and controversy as randomization. A reading of the literature suggests that clinical trialists generally feel randomization is necessary for valid inference, while biostatisticians using model-based inference often appear to prefer nearly optimal designs irrespective of any induced randomness. Dissection of the methods of treatment assignment shows that there are five basic approaches: pure randomizers, true randomizers, quasi-randomizers, permutation testers, and conventional modelers. Four of these have coherent design and analysis strategies, even though they are not mutually consistent, but the fifth and most prevalent approach (quasi-randomization) has little to recommend it. Design-adaptive allocation is defined, it is shown to provide valid inference, and a simulation indicates its efficiency advantage. In small studies, or large studies with many important prognostic covariates or analytic subgroups, design-adaptive allocation is an extremely attractive method of treatment assignment.
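The sketch below gives one covariate-adaptive assignment rule in the spirit of design-adaptive allocation: each new patient is steered toward the arm that currently carries fewer similar patients, with a biased coin preserving some randomness. It is a generic minimization-type rule for illustration, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def assign(patient_cov, arm_counts, p_follow=0.8):
    """patient_cov: levels of each covariate for the new patient.
    arm_counts[arm][covariate][level]: running counts per arm."""
    load = []
    for arm in (0, 1):
        # Imbalance score: same-level patients already on this arm,
        # summed over covariates (a simple minimization measure).
        load.append(sum(arm_counts[arm][j][lev]
                        for j, lev in enumerate(patient_cov)))
    preferred = int(np.argmin(load))
    arm = preferred if rng.random() < p_follow else 1 - preferred
    for j, lev in enumerate(patient_cov):
        arm_counts[arm][j][lev] += 1
    return arm

# Two binary covariates; counts start at zero for both arms.
counts = [[[0, 0], [0, 0]] for _ in (0, 1)]
patients = [(0, 1), (0, 1), (1, 0), (0, 0), (1, 1)]
print([assign(pc, counts) for pc in patients])
```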

5.
Recently, the US Food and Drug Administration Oncology Center of Excellence initiated Project Optimus to reform the dose optimization and dose selection paradigm in oncology drug development. The agency pointed out that the current paradigm for dose selection, based on the maximum tolerated dose (MTD), is not sufficient for molecularly targeted therapies and immunotherapies, for which efficacy may not increase once the dose reaches a certain level. In these cases, it is more appropriate to identify the optimal biological dose (OBD) that optimizes the risk–benefit tradeoff of the drug. Project Optimus has spurred tremendous interest in, and an urgent need for, guidance on designing dose optimization trials. In this article, we review several representative dose optimization designs, including model-based and model-assisted designs, and compare their operating characteristics based on 10,000 randomly generated scenarios with various dose-toxicity and dose-efficacy curves, as well as some fixed representative scenarios. The results show that, compared with model-based designs, model-assisted methods have the advantages of ease of implementation, robustness, and high accuracy in identifying the OBD. Some guidance is provided to help biostatisticians and clinicians choose appropriate dose optimization methods in practice.
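The following sketch shows the basic OBD selection step these designs share: among doses whose estimated toxicity stays below a cap, choose the dose that optimizes a simple risk-benefit utility. The probabilities, cap, and utility weight are illustrative assumptions, not values from the reviewed designs.

```python
import numpy as np

p_tox = np.array([0.05, 0.10, 0.22, 0.35, 0.50])  # estimated toxicity
p_eff = np.array([0.15, 0.40, 0.45, 0.46, 0.47])  # efficacy plateaus
tox_cap, w_tox = 0.30, 0.6                        # design parameters

admissible = p_tox <= tox_cap
utility = np.where(admissible, p_eff - w_tox * p_tox, -np.inf)
obd = int(np.argmax(utility))
print(f"OBD = dose level {obd + 1}, utility = {utility[obd]:.3f}")
```

Note that the utility picks dose level 2 here even though higher admissible doses exist, reflecting the flat dose-efficacy curve for which the MTD paradigm is insufficient.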

6.
Designing phase I clinical trials is challenging when accrual is slow or the sample size is limited. The key question is: how can the maximum tolerated dose (MTD) be identified efficiently and reliably with as small a sample size as possible? We propose model-assisted and model-based designs with adaptive intrapatient dose escalation (AIDE) to address this challenge. AIDE is adaptive in that the decision to conduct intrapatient dose escalation depends on both the patient's individual safety data and the safety data of the other enrolled patients. When both indicate reasonable safety, a patient may undergo intrapatient dose escalation, generating toxicity data at more than one dose. This strategy not only gives patients the opportunity to receive higher, potentially more effective doses, but also enables efficient statistical learning of the dose-toxicity profile of the treatment, which dramatically reduces the required sample size. Simulation studies show that the proposed designs are safe, robust, and efficient, identifying the MTD with a sample size substantially smaller than that of conventional interpatient dose escalation designs. Practical considerations are provided, and R code for implementing AIDE is available upon request.
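A minimal sketch of the gatekeeping logic as the abstract describes it: a patient may escalate within themselves only if their own data show no dose-limiting toxicity and the pooled data at the next dose look acceptably safe. The thresholds are illustrative assumptions, not the paper's calibrated rule.

```python
def may_escalate(own_dlt, pooled_tox_next, pooled_n_next, max_rate=0.30):
    """Adaptive intrapatient escalation decision (illustrative)."""
    if own_dlt:                    # the patient's individual safety data
        return False
    if pooled_n_next == 0:         # no data yet at the next dose:
        return True                # allow cautious exploration
    # Other enrolled patients' safety data at the next dose.
    return pooled_tox_next / pooled_n_next <= max_rate

print(may_escalate(own_dlt=False, pooled_tox_next=1, pooled_n_next=5))
```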

7.
Existing projection designs (e.g. maximum projection designs) attempt to achieve good space-filling properties in all projections. However, when a Gaussian process (GP) is used, model-based design criteria such as the entropy criterion are more appropriate. We employ the entropy criterion averaged over a set of projections, called the expected entropy criterion (EEC), to generate projection designs. We show that maximum EEC designs are invariant to monotonic transformations of the response, i.e. they are optimal for a wide class of stochastic process models. We also demonstrate that transforming each column of a Latin hypercube design (LHD) by a monotonic function can substantially improve the EEC. Two types of input transformations are considered: the quantile function of a symmetric Beta distribution chosen to optimize the EEC, and a nonparametric transformation corresponding to the quantile function of a symmetric density chosen to optimize the EEC. Numerical studies show that the proposed transformations of the LHD are efficient and effective for building robust maximum EEC designs. These designs give projections with markedly higher entropies and lower maximum prediction variances (MPVs), at the cost of small increases in average prediction variances (APVs), compared to state-of-the-art space-filling designs over wide ranges of covariance parameter values.
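The sketch below implements the EEC computation as described: the GP entropy term (the log-determinant of the Gaussian correlation matrix) is averaged over all two-dimensional projections, and a plain Latin hypercube design is compared with one whose columns are passed through a symmetric Beta quantile function. The correlation parameter and Beta shape are illustrative choices.

```python
from itertools import combinations
import numpy as np
from scipy.stats import beta

def log_det_corr(X, theta=5.0):
    # GP entropy is monotone in the log-determinant of the
    # correlation matrix, so this serves as the entropy term.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    R = np.exp(-theta * d2) + 1e-10 * np.eye(len(X))
    return np.linalg.slogdet(R)[1]

def eec(X, proj_dim=2):
    cols = range(X.shape[1])
    return np.mean([log_det_corr(X[:, list(c)])
                    for c in combinations(cols, proj_dim)])

rng = np.random.default_rng(1)
n, d = 20, 4
lhd = (rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T + 0.5) / n

print("EEC, plain LHD:       ", round(eec(lhd), 2))
print("EEC, Beta-transformed:", round(eec(beta(0.7, 0.7).ppf(lhd)), 2))
```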

8.
In large epidemiological studies, budgetary or logistical constraints will typically preclude study investigators from measuring all exposures, covariates, and outcomes of interest on all study subjects. We develop a flexible theoretical framework that incorporates a number of familiar designs, such as case-control and cohort studies, as well as multistage sampling designs. Our framework also allows for designed missingness and includes the option of outcome-dependent designs. Our formulation is based on maximum likelihood and generalizes well-known results for inference with missing data to the multistage setting. A variety of techniques are applied to streamline the computation of the Hessian matrix for these designs, facilitating the development of an efficient software tool that implements a wide variety of designs.

9.
Incorporating historical data has great potential to improve the efficiency of phase I clinical trials and to accelerate drug development. For model-based designs, such as the continual reassessment method (CRM), this can be conveniently carried out by specifying a “skeleton,” that is, the prior estimate of the dose-limiting toxicity (DLT) probability at each dose. In contrast, little work has been done on incorporating historical data into model-assisted designs, such as the Bayesian optimal interval (BOIN), Keyboard, and modified toxicity probability interval (mTPI) designs. This has led to the misconception that model-assisted designs cannot incorporate prior information. In this paper, we propose a unified framework for incorporating historical data into model-assisted designs. The proposed approach uses the well-established “skeleton” approach combined with the concept of prior effective sample size, so it is easy to understand and use. More importantly, our approach maintains the hallmark of model-assisted designs, simplicity: the dose escalation/de-escalation rule can be tabulated before the trial begins. Extensive simulation studies show that the proposed method can effectively incorporate prior information to improve the operating characteristics of model-assisted designs, similarly to model-based designs.
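The sketch below shows the proposed mechanics in miniature: the skeleton and the prior effective sample size (ESS) define Beta pseudo-counts at each dose, and a simple interval rule then acts on the resulting posterior means, so decisions can still be tabulated in advance. The skeleton values, ESS, and interval bounds are illustrative; the decision rule is a BOIN-like simplification rather than any specific published design.

```python
import numpy as np

skeleton = np.array([0.05, 0.12, 0.25, 0.40])  # prior DLT estimates
ess      = 6.0                                 # prior worth ~6 patients
target, eps = 0.25, 0.05                       # interval (0.20, 0.30)

# Beta(a, b) prior at each dose with a + b = ESS and mean = skeleton.
a0, b0 = skeleton * ess, (1.0 - skeleton) * ess

def decision(dose, n_tox, n_pat):
    """Compare the posterior mean toxicity with the target interval."""
    post_mean = (a0[dose] + n_tox) / (ess + n_pat)
    if post_mean < target - eps:
        return "escalate"
    if post_mean > target + eps:
        return "de-escalate"
    return "stay"

# 1 DLT among 6 patients at dose level 3 (index 2):
print(decision(2, n_tox=1, n_pat=6))
```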

10.
This short communication documents that rule-based study designs such as the ‘3 + 3’ design are still being used in early-phase oncology development programs despite their inferior performance relative to model-based and model-assisted designs. Statisticians have an opportunity to shape and improve early-phase oncology drug development programs by introducing newer, more efficient study designs that estimate the optimal biological dose to their oncology trialist colleagues.

11.
Adaptive designs for multi-armed clinical trials have become increasingly popular because of their potential to shorten development times and to increase patient response. However, developing response-adaptive designs that offer patient benefit while ensuring that the resulting trial provides a statistically rigorous and unbiased comparison of the treatments is highly challenging. In this paper, the theory of multi-armed bandit problems is used to define near-optimal adaptive designs in the context of a clinical trial with a normally distributed endpoint with known variance. We report the operating characteristics (type I error, power, bias) and patient benefit of these approaches and of alternative designs using simulation studies based on an ongoing trial. These results are then compared with those recently published for Bernoulli endpoints. Many limitations and advantages are similar in both cases, but there are also important differences, especially with respect to type I error control. This paper proposes a simulation-based testing procedure to correct for the type I error inflation that bandit-based and adaptive rules can induce.
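The following sketch demonstrates the simulation-based correction in a stripped-down form: an adaptive two-arm trial with a greedy allocation rule is simulated many times under the null, and the empirical quantile of the test statistic replaces the asymptotic critical value. The allocation rule and trial sizes are simplifying assumptions, not the bandit rules studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_trial(delta, n=60, sigma=1.0):
    """Normal endpoint; greedy allocation after a burn-in.
    Returns the usual two-sample z statistic."""
    y = {0: [], 1: []}
    for i in range(n):
        if i < 10:
            arm = i % 2                               # burn-in: alternate
        else:
            arm = int(np.mean(y[1]) > np.mean(y[0]))  # greedy choice
        y[arm].append(rng.normal(delta * arm, sigma))
    m0, m1 = np.mean(y[0]), np.mean(y[1])
    se = sigma * np.sqrt(1 / len(y[0]) + 1 / len(y[1]))
    return (m1 - m0) / se

null_z = np.array([run_trial(delta=0.0) for _ in range(5000)])
crit = np.quantile(np.abs(null_z), 0.95)              # calibrated cutoff
print(f"calibrated |z| cutoff: {crit:.2f} (asymptotic: 1.96)")
print(f"rejection rate at 1.96: {(np.abs(null_z) > 1.96).mean():.3f}")
```

The second line of output lets one check whether rejecting at the asymptotic 1.96 cutoff stays near the nominal 5% level under adaptive allocation; when it does not, the calibrated cutoff restores control.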

12.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration-time curve (AUC), between a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or AUC in order to estimate the sample size. Since the variance is unknown, current 2-stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, this variance estimate is unstable and may result in a stage 2 sample size that is too large or too small. The problem is magnified in bioequivalence tests with a serial sampling schedule, in which only one sample is collected from each individual, making a correct variance assumption even more difficult. To solve this problem, we propose 3-stage designs. Our designs increase the sample size gradually over the stages, so that extremely large sample sizes do not occur. With one more stage of data, power is increased. Moreover, the variance estimated using data from both stages 1 and 2 is more stable than that estimated from stage 1 data alone in a 2-stage design. These features of the proposed designs are demonstrated by simulations. Testing significance levels are adjusted to control the overall type I error at the same level for all the multistage designs.
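A minimal sketch of the staging logic: after each stage, the variance is re-estimated from all accumulated data and the next stage tops the sample up toward the implied requirement, subject to a per-stage cap so the size grows gradually. The sizing formula, caps, and data are illustrative placeholders; the paper's procedure additionally adjusts the significance levels to control the overall type I error.

```python
import numpy as np

rng = np.random.default_rng(3)

def needed_n(var, effect=0.6, z=3.0):
    # z stands in for the usual (z_alpha + z_beta) factor.
    return int(np.ceil(2 * z**2 * var / effect**2))

data = []
for stage, cap in enumerate((12, 24, 48), start=1):
    if not data:
        take = cap                              # stage 1: fixed pilot
    else:
        short = needed_n(np.var(data, ddof=1)) - len(data)
        if short <= 0:
            break                               # already enough subjects
        take = min(cap, short)                  # capped, gradual growth
    data.extend(rng.normal(0.0, 0.8, take))     # e.g. log(Cmax) differences
    print(f"stage {stage}: n = {len(data)}, var = {np.var(data, ddof=1):.2f}")
```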

13.
This paper extends the work of Russell (1976). Proof is given that, for many parameter sets, all O:XB designs belonging to the set are connected. It is shown how an (M,S)-optimal design may be selected from the M-optimal designs of a given parameter set. The efficiencies of these (M,S)-optimal designs are displayed, and it is concluded that the (M,S)-optimality criterion is useful for selecting designs of high efficiency.

14.
Neoteric ranked set sampling (NRSS) is a recently developed sampling plan derived from the well-known ranked set sampling (RSS) scheme. NRSS has been shown to provide more efficient estimators of the population mean and variance than RSS and other sampling designs based on ranked sets. In this work, we propose and evaluate the performance of two-stage sampling designs based on NRSS, introducing five different sampling schemes. Through an extensive Monte Carlo simulation study, we verify that all proposed designs outperform RSS, NRSS, and the original double RSS design, producing estimators of the population mean with lower mean squared error. Furthermore, as with NRSS, the two-stage NRSS estimators exhibit some bias for asymmetric distributions. We complement the study with a discussion of the relative performance of the proposed estimators and an additional simulation based on data on the diameters and heights of pine trees.
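For readers unfamiliar with the baseline these designs modify, the sketch below simulates classic RSS: in each cycle, k independent sets of k units are ranked and the i-th order statistic is kept from the i-th set, which reduces the variance of the sample mean relative to simple random sampling of the same measured size. Perfect ranking and the gamma population are simplifying assumptions; NRSS and the proposed two-stage schemes rearrange this ranking step.

```python
import numpy as np

rng = np.random.default_rng(5)

def rss_mean(draw, k=4, cycles=25):
    sample = []
    for _ in range(cycles):
        for i in range(k):
            ranked = np.sort(draw(k))   # rank a set of k units
            sample.append(ranked[i])    # measure only the i-th one
    return np.mean(sample)

draw = lambda n: rng.gamma(2.0, 1.5, n)              # skewed population
rss_est = [rss_mean(draw) for _ in range(2000)]      # n = 100 measured
srs_est = [np.mean(draw(100)) for _ in range(2000)]  # same n = 100
print(f"var RSS: {np.var(rss_est):.4f}   var SRS: {np.var(srs_est):.4f}")
```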

15.
Probability proportional to size (PPS) sampling is one of the most widely used designs for finite populations. We propose modifications to PPS designs with replacement and to the Rao–Hartley–Cochran design without replacement. These modifications divide the population into two groups, with units in the first group included in the sample with probability one. Under certain conditions, in both the with- and without-replacement designs, the estimator of the population total based on the modified PPS sampling design is shown to be better than the corresponding estimator based on the unmodified PPS design. We illustrate our modification with an example and an application.
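The sketch below illustrates the modification for the with-replacement case: units whose would-be inclusion probability reaches one are enumerated completely, and a Hansen-Hurwitz PPS estimator covers the remainder. The single-pass cutoff and the simulated population are illustrative simplifications (in practice the certainty group may be formed iteratively).

```python
import numpy as np

rng = np.random.default_rng(11)
N, n = 200, 20
size = rng.pareto(1.5, N) + 1.0               # skewed size measure
y = 3.0 * size + rng.normal(0.0, 1.0, N)      # y roughly prop. to size

certain = n * size / size.sum() >= 1.0        # would-be prob >= 1
idx_rest = np.flatnonzero(~certain)
m = n - int(certain.sum())                    # draws left for group 2

p = size[idx_rest] / size[idx_rest].sum()     # PPS draw probabilities
j = rng.choice(len(idx_rest), size=m, replace=True, p=p)
hh = np.mean(y[idx_rest[j]] / p[j])           # Hansen-Hurwitz estimate

print(f"estimate: {y[certain].sum() + hh:.1f}   truth: {y.sum():.1f}")
```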

16.
Courses in sampling often lack a coherent structure because many related sampling designs, estimators, variances, and variance estimators are presented as separate cases. The Horvitz-Thompson theorem offers a needed integrating perspective for teaching the methods and fundamental concepts of probability sampling. Development of basic concepts in sampling via this approach provides the student with tools to solve more complicated problems, and helps to avoid some common stumbling blocks of beginning students. Examples from natural resource sampling are provided to illustrate applications and insight gained from this approach.
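The integrating idea is compact enough to show directly: whatever the design, the Horvitz-Thompson estimator of a population total divides each sampled value by its inclusion probability. The sketch uses Poisson sampling because its inclusion probabilities are exact and simple; the population is an invented natural-resource-style example.

```python
import numpy as np

rng = np.random.default_rng(21)
N = 1000
y = rng.gamma(3.0, 2.0, N)                          # e.g. tree volumes
pi = np.clip(0.02 + 0.9 * y / y.max(), 0.02, 0.9)   # inclusion probs

estimates = []
for _ in range(2000):
    s = rng.random(N) < pi                   # Poisson sampling design
    estimates.append(np.sum(y[s] / pi[s]))   # Horvitz-Thompson total

print(f"true total: {y.sum():.0f}   mean HT estimate: "
      f"{np.mean(estimates):.0f}")           # unbiased for any pi > 0
```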

17.
We consider the method of distance sampling described by Buckland, Anderson, Burnham and Laake in 1993. We explore the properties of the methodology in simple cases chosen to allow direct and accessible comparisons of distance sampling in the design- and model-based frameworks. In particular, we obtain expressions for the bias and variance of the distance sampling estimator of object density and for the expected value of the recommended analytic variance estimator within each framework. These results enable us to clarify aspects of the performance of the methodology which may be of interest to users and potential users of distance sampling.
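For orientation, the sketch below works through the standard line-transect calculation with a half-normal detection function g(x) = exp(-x^2 / (2 sigma^2)): the maximum likelihood estimate of sigma^2 is the mean squared perpendicular distance, the effective strip half-width is sigma * sqrt(pi/2), and density is n / (2 L mu_hat). The simulated distances are illustrative; the design- versus model-based comparisons analysed in the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(13)
sigma_true, L_km = 10.0, 50.0               # metres; transect length

# Simulated perpendicular detection distances (half-normal model).
x = np.abs(rng.normal(0.0, sigma_true, 120))

sigma2_hat = np.mean(x ** 2)                # half-normal MLE of sigma^2
mu_hat = np.sqrt(sigma2_hat * np.pi / 2)    # effective half-width (m)
density = len(x) / (2 * L_km * 1000 * mu_hat)   # objects per m^2
print(f"effective half-width: {mu_hat:.1f} m, "
      f"density: {density * 1e6:.1f} per km^2")
```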

18.
Unbiased estimators for restricted adaptive cluster sampling
In adaptive cluster sampling the size of the final sample is random, which creates design problems. To get around this, Brown (1994) and Brown & Manly (1998) proposed a modification of the method that places a restriction on the sample size but uses standard, biased estimators of the population mean. In this paper a new unbiased estimator and an unbiased variance estimator are proposed, based on estimators proposed by Murthy (1957) and extended to sequential and adaptive sampling designs by Salehi & Seber (2001). The paper also considers a restricted version of the adaptive scheme of Salehi & Seber (1997a) in which the networks are selected without replacement, and obtains unbiased estimators for it. The method is demonstrated by a simple example. Simulations from this example show that the new estimators compare very favourably with the standard biased estimators.

19.
If the population size is not a multiple of the sample size, the usual linear systematic sampling design is unattractive: the achieved sample size will either vary or be constant but different from the required size. Only a few modified systematic sampling designs are known to address this problem, and in the presence of linear trend most of them do not perform well. In this paper, a modified systematic sampling design known as remainder modified systematic sampling (RMSS) is introduced. There are seven cases of RMSS, and the results in this paper suggest that the proposed design performs favorably in every case, while providing linear trend-free sampling in three of the seven cases. To obtain linear trend-free sampling in the remaining cases, and thus improve results, an end-corrections estimator is constructed.
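The sketch below reproduces the problem the design addresses rather than the design itself: with N = 23 and a desired n = 5, linear systematic sampling with interval k = floor(N/n) = 4 returns a sample of size 5 or 6 depending on the random start, so the achieved size varies.

```python
N, n = 23, 5
k = N // n                                     # sampling interval = 4
sizes = [len(range(start, N, k)) for start in range(k)]
print("sample size by random start:", sizes)   # [6, 6, 6, 5]
```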

20.
Compared with most existing phase I designs, the recently proposed calibration-free odds (CFO) design has been demonstrated to be robust, model-free, and easy to use in practice. However, the original CFO design cannot handle late-onset toxicities, which are commonly encountered in phase I oncology dose-finding trials of targeted agents or immunotherapies. To account for late-onset outcomes, we extend the CFO design to its time-to-event (TITE) version, which inherits the calibration-free and model-free properties. One salient feature of CFO-type designs is their game-theoretic comparison of three doses at a time (the current dose and its two neighbors), whereas interval-based designs use only the data at the current dose and are thus less efficient. We conduct comprehensive numerical studies of the TITE-CFO design under both fixed and randomly generated scenarios. TITE-CFO shows robust and efficient performance compared with interval-based and model-based counterparts. In conclusion, the TITE-CFO design provides a robust, efficient, and easy-to-use alternative for phase I trials with late-onset toxicity outcomes.
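As a rough illustration of the 'three doses at a time' idea, the sketch below places Beta posteriors on the toxicity rates of the current dose and its two neighbours and moves according to where the evidence points. This is only a caricature built on assumed priors and thresholds; it is not the calibrated odds rule of the actual CFO/TITE-CFO design, and it ignores the time-to-event weighting.

```python
from scipy.stats import beta

target = 0.30
# (toxicities, patients) at dose levels d-1, d, d+1; invented data.
data = {"lower": (0, 6), "current": (2, 9), "upper": (3, 5)}

post = {k: beta(0.5 + t, 0.5 + n - t) for k, (t, n) in data.items()}
p_over = {k: 1 - d.cdf(target) for k, d in post.items()}  # P(tox > target)

if p_over["current"] > 0.5 and p_over["lower"] < 0.5:
    move = "de-escalate"
elif p_over["current"] < 0.5 and p_over["upper"] < 0.5:
    move = "escalate"
else:
    move = "stay"
print({k: round(v, 2) for k, v in p_over.items()}, "->", move)
```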

