Similar literature
20 similar documents retrieved, search time 46 ms
1.
In this paper, we study the bioequivalence (BE) inference problem motivated by pharmacokinetic data that were collected using the serial sampling technique. In serial sampling designs, subjects are independently assigned to one of two drugs; each subject can be sampled only once, and data are collected at K distinct timepoints from multiple subjects. We consider design and hypothesis testing for the parameter of interest: the area under the concentration–time curve (AUC). Decision rules for demonstrating BE were established using an equivalence test for either the ratio or the logarithmic difference of two AUCs. The proposed t-test can deal with cases where the two AUCs have unequal variances. To control the type I error rate, the degrees of freedom involved were adjusted using Satterthwaite's approximation. A power formula was derived to allow the determination of necessary sample sizes. Simulation results show that, when the two AUCs have unequal variances, the type I error rate is better controlled by the proposed method than by a method that assumes equal variances. We also propose an unequal subject allocation method that improves power relative to equal and symmetric allocation. The methods are illustrated using practical examples.
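The Satterthwaite adjustment described in this abstract can be sketched as follows. This is a minimal illustration of the standard Welch–Satterthwaite formula for comparing two means with unequal variances, not the authors' exact procedure; all function and variable names are ours:

```python
from math import sqrt

def satterthwaite_df(s1_sq, n1, s2_sq, n2):
    """Satterthwaite-approximated degrees of freedom for comparing two
    means (here, two AUC estimates) with unequal variances."""
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

def welch_t(mean1, s1_sq, n1, mean2, s2_sq, n2):
    """t statistic for the difference of two (log-)AUC estimates,
    referred to the t distribution with satterthwaite_df degrees of freedom."""
    return (mean1 - mean2) / sqrt(s1_sq / n1 + s2_sq / n2)
```

With equal variances and equal group sizes the formula reduces to the pooled value 2(n - 1); with unequal variances it falls between min(n1, n2) - 1 and n1 + n2 - 2, which is what controls the type I error rate.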

2.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration-time curve (AUC), between a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or AUC for the estimation of sample size. Since the variance is unknown, current 2-stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, this variance estimate is unstable and may result in too large or too small a sample size for stage 2. The problem is magnified in bioequivalence tests with a serial sampling schedule, in which only one sample is collected from each individual, so a correct variance assumption becomes even harder to make. To solve this problem, we propose 3-stage designs. Our designs increase sample sizes gradually over the stages, so that extremely large sample sizes do not occur. With one more stage of data, power is increased. Moreover, the variance estimated using data from both stages 1 and 2 is more stable than the estimate based on stage 1 data alone in a 2-stage design. These features of the proposed designs are demonstrated by simulations. Significance levels for testing are adjusted so that the overall type I error is controlled at the same level for all the multistage designs.

3.
We consider a family of two-stage sampling methods for a binomial parameter that guarantee a certain precision. It is shown that, among all such methods, one due to Birnbaum and Healy minimizes the average expected second stage sample size with respect to a certain density on the parameter space. It does not, however, minimize the average expected second stage sample size with respect to the uniform density.

4.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3278-3300
Under complex survey sampling, in particular when selection probabilities depend on the response variable (informative sampling), the sample and population distributions differ, possibly resulting in selection bias. This article addresses the problem by fitting two statistical models for one-way analysis of variance under a complex survey design (e.g., two-stage sampling, stratification, and unequal selection probabilities): the variance components model (a two-stage model) and the fixed effects model (a single-stage model). Classical theory underlying the use of the two-stage model assumes simple random sampling at each of the two stages; in that case the model holding for the sample, after sample selection, is the same as the model holding for the population before selection. When the selection probabilities are related to the values of the response variable, standard estimates of the population model parameters may be severely biased, possibly leading to false inference. The idea behind the approach is to extract the model holding for the sample data as a function of the population model and the first-order inclusion probabilities, and then to fit the sample model using analysis of variance, maximum likelihood, and pseudo maximum likelihood methods of estimation. The main feature of the proposed techniques is their behavior in terms of the informativeness parameter. We also show that using the population model while ignoring the informative sampling design yields biased model fitting.

5.
The problem of comparing several experimental treatments to a standard arises frequently in medical research. Various multi-stage randomized phase II/III designs have been proposed that select one or more promising experimental treatments and compare them to the standard while controlling overall Type I and Type II error rates. This paper addresses phase II/III settings where the joint goals are to increase the average time to treatment failure and control the probability of toxicity while accounting for patient heterogeneity. We are motivated by the desire to construct a feasible design for a trial of four chemotherapy combinations for treating a family of rare pediatric brain tumors. We present a hybrid two-stage design based on two-dimensional treatment effect parameters. A targeted parameter set is constructed from elicited parameter pairs considered to be equally desirable. Bayesian regression models for failure time and the probability of toxicity as functions of treatment and prognostic covariates are used to define two-dimensional covariate-adjusted treatment effect parameter sets. Decisions at each stage of the trial are based on the ratio of posterior probabilities of the alternative and null covariate-adjusted parameter sets. Design parameters are chosen to minimize expected sample size subject to frequentist error constraints. The design is illustrated by application to the brain tumor trial.

6.
Simon's two-stage designs are widely used in clinical trials to assess the activity of a new treatment. In practice, it is often the case that the second stage sample size is different from the planned one. For this reason, the critical value for the second stage is no longer valid for statistical inference. Existing approaches for making statistical inference are either based on asymptotic methods or not optimal. We propose an approach to maximize the power of a study while maintaining the type I error rate, where the type I error rate and power are calculated exactly from binomial distributions. The critical values of the proposed approach are numerically searched by an intelligent algorithm over the complete parameter space. It is guaranteed that the proposed approach is at least as powerful as the conditional power approach, which is a valid but non-optimal approach. The power gain of the proposed approach can be substantial as compared to the conditional power approach. We apply the proposed approach to a real Phase II clinical trial.
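The exact binomial calculations underlying a Simon two-stage design can be sketched as follows. This is a generic illustration, not the authors' search algorithm; the design parameters used in the comments (r1 responses out of n1 at stage 1, r out of n overall) are standard Simon notation:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def reject_prob(p, n1, r1, n, r):
    """Probability of rejecting H0 under response rate p for a Simon
    two-stage design: stop for futility if at most r1 responses are seen
    in the first n1 patients; otherwise reject H0 at the end if the total
    number of responses among all n patients exceeds r."""
    n2 = n - n1
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        # tail probability of seeing more than r - x1 responses in stage 2
        tail = sum(binom_pmf(x2, n2, p)
                   for x2 in range(max(0, r - x1 + 1), n2 + 1))
        total += binom_pmf(x1, n1, p) * tail
    return total
```

Evaluating reject_prob at the null rate p0 gives the exact type I error rate and at the alternative rate p1 the exact power, so candidate critical values can be compared without asymptotic approximations.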

7.
8.
This paper considers the effects of informative two-stage cluster sampling on estimation and prediction. The aims of this article are twofold: first, to estimate the parameters of the superpopulation model for two-stage cluster sampling from a finite population, when the sampling design at both stages is informative, using maximum likelihood estimation methods based on the sample-likelihood function; second, to predict the finite population total and to predict the cluster-specific effects and the cluster totals, both for clusters in the sample and for clusters not in the sample. To achieve this, we derive the sample and sample-complement distributions and the moments of the first- and second-stage measurements. We also derive the conditional sample and conditional sample-complement distributions and the moments of the cluster-specific effects given the cluster measurements. It should be noted that classical design-based inference, which consists of weighting the sample observations by the inverse of the sample selection probabilities, cannot be applied to the prediction of cluster-specific effects for clusters not in the sample. We also give an alternative justification of the Royall (1976, The linear least squares prediction approach to two-stage sampling, Journal of the American Statistical Association 71, 657-664) predictor of the finite population total under a two-stage cluster population. Furthermore, small-area models are studied under informative sampling.

9.
A self-weighting stratified multistage sampling design has three main features: first, the sample sizes at all sampling stages after the first are constant; second, sample sizes are allocated to strata in proportion to the number of ultimate units in each stratum; third, the first several stages employ sampling while the last stage employs simple random sampling with or without replacement. Based on these three features, a self-weighting sampling design is constructed for China's population change survey.

10.
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, time intervals for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of population pharmacokinetic models, which are generally nonlinear mixed effects models, no analytical solution is available for determining sampling windows. We propose a method for the determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determining sampling windows for any nonlinear mixed effects model, although our work focuses on an application to population pharmacokinetic models.

11.
The planning of bioequivalence (BE) studies, as for any clinical trial, requires a priori specification of an effect size for the determination of power and an assumption about the variance. The specified effect size may be overly optimistic, leading to an underpowered study. The assumed variance can be either too small or too large, leading, respectively, to studies that are underpowered or overly large. There has been much work in the clinical trials field on various types of sequential designs that include sample size reestimation after the trial is started, but these have seen little use in BE studies. The purpose of this work was to validate at least one such method for crossover design BE studies. Specifically, we considered sample size reestimation for a two-stage trial based on the variance estimated from the first stage. We identified two methods based on Pocock's method for group sequential trials that met our requirement of at most a negligible increase in the type I error rate.
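The reestimation step can be sketched with a textbook normal-approximation sample size formula: recompute the total size from the stage-1 variance estimate and subtract what stage 1 already contributed. This is a generic illustration, not the Pocock-adjusted rule the abstract validates, and it omits the significance level adjustment:

```python
from math import ceil
from statistics import NormalDist

def reestimate_n2(s1_sq, delta, n1, alpha=0.05, power=0.80):
    """Stage-2 per-group sample size for a two-stage trial, recomputed
    from the stage-1 variance estimate s1_sq for a two-sided test of a
    difference delta between two groups."""
    z = NormalDist().inv_cdf
    n_total = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * s1_sq / delta ** 2
    return max(0, ceil(n_total) - n1)
```

The instability discussed above shows up directly here: because n_total is proportional to s1_sq, an unstable stage-1 variance estimate translates one-for-one into an unstable stage-2 sample size.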

12.
Abstract

In the present article, an effort has been made to develop calibration estimators of the population mean under a two-stage stratified random sampling design when auxiliary information is available at the primary stage unit (psu) level. The properties of the developed estimators are derived in terms of the design-based approximate variance and an approximately consistent design-based estimator of that variance. Simulation studies have been conducted to investigate the relative performance of the calibration estimators against the usual estimator of the population mean, which does not use auxiliary information, in two-stage stratified random sampling. The proposed calibration estimators outperformed the usual estimator.
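The calibration idea can be illustrated in its simplest form: adjust the design weights so the weighted sample total of an auxiliary variable matches its known population total. This single-auxiliary linear (GREG-type) sketch is an illustrative special case, not the estimators developed in the paper:

```python
def calibrate_weights(d, x, x_total):
    """Linear calibration: adjust design weights d so that the weighted
    sample total of the auxiliary variable x equals its known population
    total x_total, moving minimally from d in a chi-square distance sense."""
    xhat = sum(di * xi for di, xi in zip(d, x))
    lam = (x_total - xhat) / sum(di * xi * xi for di, xi in zip(d, x))
    return [di * (1 + lam * xi) for di, xi in zip(d, x)]
```

The calibrated weights reproduce x_total exactly, which is the defining property of a calibration estimator; applying the same weights to the study variable yields the calibrated estimate of its total or mean.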

13.
The effect of serial correlation on acceptance sampling plans by variables is examined in this paper, assuming the quality measurements follow an AR(p) process. The effect of serial correlation can be assessed by comparing OC curves, sample sizes, and the producer's risk, α, with those of the independent case when the process standard deviation, σ, is known. When σ is unknown and n is large, sampling plans can be constructed using the central limit theorem. However, for σ unknown and n small, there is no satisfactory method of obtaining sampling plans.
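For the independent, σ-known baseline case mentioned above, the OC curve of a variables plan with a single upper specification limit U and acceptance rule x̄ + kσ ≤ U has a closed form; a minimal sketch in our notation, not the paper's:

```python
from math import sqrt
from statistics import NormalDist

def oc_known_sigma(mu, sigma, n, k, upper_limit):
    """Probability of accepting a lot from n independent normal
    measurements when the lot is accepted if xbar + k*sigma <= U,
    with the process standard deviation sigma known."""
    phi = NormalDist().cdf
    return phi(sqrt(n) * ((upper_limit - mu) / sigma - k))
```

Plotting this against μ gives the independent-case OC curve; under an AR(p) process the effective variance of x̄ changes, which is what shifts the OC curve, sample size, and producer's risk away from these baseline values.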

14.
Phase II clinical trials often use a binary outcome, so assessing the success rate of the treatment is a primary objective. Reporting confidence intervals is common practice for clinical trials. Due to the group sequential design and the relatively small sample size, many existing confidence intervals for phase II trials are very conservative. In this paper, we propose a class of confidence intervals for binary outcomes. We also provide a general theory for assessing the coverage of confidence intervals for discrete distributions, and hence make recommendations for choosing the parameter used in calculating the confidence interval. The proposed method is applied to Simon's [14] optimal two-stage design in numerical studies. The proposed method can also be viewed as an alternative approach to confidence intervals for discrete distributions in general.
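Exact coverage assessment for a discrete distribution can be illustrated as follows, using the Wilson score interval as one concrete member of a class of binomial confidence intervals; the choice of interval here is ours, not the paper's:

```python
from math import comb, sqrt
from statistics import NormalDist

def wilson_ci(x, n, alpha=0.05):
    """Wilson score interval for a binomial proportion x/n."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

def exact_coverage(p, n, alpha=0.05):
    """Exact coverage probability at p: sum the binomial pmf over all
    outcomes x whose confidence interval contains p."""
    cov = 0.0
    for x in range(n + 1):
        lo, hi = wilson_ci(x, n, alpha)
        if lo <= p <= hi:
            cov += comb(n, x) * p ** x * (1 - p) ** (n - x)
    return cov
```

Because the outcome is discrete, exact_coverage oscillates around the nominal level as p varies; plotting it over a grid of p values is the standard way to compare candidate intervals and to see the conservatism the abstract describes.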

15.
ABSTRACT

The vast majority of the literature on the design of sampling plans by variables assumes that the distribution of the quality characteristic variable is normal, and that only its mean varies while its variance is known and remains constant. But for many processes the quality variable is nonnormal, and either one or both of the mean and the variance of the variable can vary randomly. In this paper, an optimal economic approach is developed for the design of plans for acceptance sampling by variables having Inverse Gaussian (IG) distributions. The advantage of developing an IG distribution based model is that it can be used for diverse quality variables ranging from highly skewed to almost symmetrical. We assume that the process has two independent assignable causes, one of which shifts the mean of the quality characteristic variable of a product and the other shifts the variance. Since a product quality variable may be affected by any one or both of the assignable causes, three likely cases of shift (mean shift only, variance shift only, and both mean and variance shift) have been considered in the modeling process. For all of these likely scenarios, mathematical models giving the cost of using a variable acceptance sampling plan are developed. The cost models are optimized in selecting the optimal sampling plan parameters, such as the sample size and the upper and lower acceptance limits. A large set of numerical example problems is solved for all the cases. Some of these numerical examples are also used to depict the consequences of: 1) assuming that the quality variable is normally distributed when the true distribution is IG, and 2) using sampling plans from the existing standards instead of the optimal plans derived by the methodology developed in this paper. Sensitivities of some of the model input parameters are also studied using the analysis of variance technique. The information obtained on parameter sensitivities can be used by model users to allocate resources prudently when estimating the input parameters.

16.
17.
Bryant, Hartley & Jessen (1960) presented a two-way stratification sampling design for when the sample size n is less than the number of strata. Their design was extended to a three-way stratification case by Chaudhary & Kumar (1988), but this design does not take into account serial correlation, which might be present as a result of a time variable. In this paper, a new sampling procedure is presented for three-way stratification when one of the stratifying variables is time. The purpose of such a design is to take serial correlation into account. The variance of the unweighted estimator of the population mean with respect to a superpopulation model is used as the basis for comparison. Simulation results show that the suggested design is more efficient than the Chaudhary & Kumar (1988) design.

18.
19.
ABSTRACT

Recently, distance sampling emerged as an advantageous technique to estimate the abundance of many animal populations, including ungulates. Its basic design involves the random selection of several samplers (transects or points) within the population range, and a Horvitz–Thompson-like estimator is then applied to estimate the population abundance while correcting for animal detectability. Ensuring even coverage probability is essential for subsequent inference on the population size, but it may not be achievable because of limited access to parts of the population range. Moreover, in several environmental conditions, a random selection of samplers may induce very high survey costs because it does not minimize the displacement time of the observer(s) between successive samplers. We thus tested whether two-stage designs – based on the random selection of points and then of nearby samplers – could be more cost-effective, for a given population size and when even area coverage cannot be guaranteed. Here, we further extend our analyses to assess the performance of two-stage designs under varying animal densities.
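The Horvitz–Thompson-like estimator referred to above can be sketched as follows; the per-sampler inclusion probabilities and the single, assumed-known detection probability are illustrative simplifications of what a full distance sampling analysis would estimate:

```python
def ht_abundance(counts, pi, p_detect):
    """Horvitz-Thompson-like abundance estimate: counts[j] animals are
    detected from sampler j, whose location is covered with inclusion
    probability pi[j]; p_detect is the probability of detecting an
    animal within a covered strip. Each detection is inflated by the
    reciprocal of its overall sampling-and-detection probability."""
    return sum(c / (pj * p_detect) for c, pj in zip(counts, pi))
```

Uneven coverage enters through the pi[j] terms: when the achieved coverage probabilities differ from those assumed (for example, because parts of the range are inaccessible), the inflation factors are wrong and the abundance estimate is biased, which is the concern the abstract raises.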

20.
I consider the design of multistage sampling schemes for epidemiologic studies involving latent variable models, with surrogate measurements of the latent variables on a subset of subjects. Such models arise in various situations: when detailed exposure measurements are combined with variables that can be used to assign exposures to unmeasured subjects; when biomarkers are obtained to assess an unobserved pathophysiologic process; or when additional information is to be obtained on confounding or modifying variables. In such situations, it may be possible to stratify the subsample on data available for all subjects in the main study, such as outcomes, exposure predictors, or geographic locations. Three circumstances where analytic calculations of the optimal design are possible are considered: (i) when all variables are binary; (ii) when all are normally distributed; and (iii) when the latent variable and its measurement are normally distributed, but the outcome is binary. In each of these cases, it is often possible to considerably improve the cost efficiency of the design by appropriate selection of the sampling fractions. More complex situations arise when the data are spatially distributed: the spatial correlation can be exploited to improve exposure assignment for unmeasured locations using available measurements on neighboring locations; some approaches for informative selection of the measurement sample using location and/or exposure predictor data are considered.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号