Similar Literature
20 similar documents retrieved.
1.
A common question in the analysis of binary data is how to deal with overdispersion. One widely advocated sampling distribution for overdispersed binary data is the beta-binomial model. For example, this distribution is often used to model litter effects in toxicological experiments. Testing the null hypothesis of a beta-binomial distribution against all other distributions is difficult, however, when the litter sizes vary greatly. Herein, we propose a test statistic based on combining Pearson statistics from individual litter sizes, and estimate the p-value using bootstrap techniques. A Monte Carlo study confirms the accuracy and power of the test against a beta-binomial distribution contaminated with a few outliers. The method is applied to data from environmental toxicity studies.
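As a rough illustration of the bootstrap idea, the sketch below fits a beta-binomial to litters of varying sizes, forms a Pearson-type goodness-of-fit statistic, and estimates its p-value by parametric bootstrap. The moment-based fit and the exact form of the statistic are simplifying assumptions, not the authors' combined-Pearson construction.

```python
# Illustrative parametric bootstrap for beta-binomial goodness of fit.
import numpy as np
from scipy.stats import betabinom

rng = np.random.default_rng(1)

def fit_betabinom_mom(n, y):
    """Crude method-of-moments fit of (a, b) with varying litter sizes n."""
    p = np.sum(y) / np.sum(n)
    # pooled overdispersion from binomial-scaled residuals: mean ~ 1 + (n-1)*rho
    phi = np.mean((y - n * p) ** 2 / (n * p * (1 - p)))
    rho = min(max((phi - 1) / (np.mean(n) - 1), 1e-6), 0.99)
    s = (1 - rho) / rho                          # a + b
    return p * s, (1 - p) * s

def pearson_stat(n, y, a, b):
    """Sum of per-litter Pearson contributions under the fitted model."""
    m = n * a / (a + b)
    return np.sum((y - m) ** 2 / betabinom.var(n, a, b))

def bootstrap_pvalue(n, y, B=999):
    a, b = fit_betabinom_mom(n, y)
    t_obs = pearson_stat(n, y, a, b)
    t_boot = np.empty(B)
    for i in range(B):                           # refit within each resample
        y_star = betabinom.rvs(n, a, b, random_state=rng)
        a_s, b_s = fit_betabinom_mom(n, y_star)
        t_boot[i] = pearson_stat(n, y_star, a_s, b_s)
    return (1 + np.sum(t_boot >= t_obs)) / (B + 1)

n = rng.integers(4, 15, size=40)                 # toy litters of varying size
y = betabinom.rvs(n, 2.0, 6.0, random_state=rng)
print("bootstrap p-value:", bootstrap_pvalue(n, y))
```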

2.
Teratological experiments are controlled dose-response studies in which impregnated animals are randomly assigned to various exposure levels of a toxic substance. Subsequently, both continuous and discrete responses are recorded on the litters of fetuses that these animals produce. Discrete responses are usually binary in nature, such as the presence or absence of some fetal anomaly. These clustered binary data usually exhibit over-dispersion (or under-dispersion), which can be interpreted as either variation between litter response probabilities or intralitter correlation. To model the correlation and/or variation, the beta-binomial distribution has been assumed for the number of positive fetal responses within a litter. Although the mean of the beta-binomial model has been linked to dose-response functions, the model may be too restrictive in how it measures over-dispersion for data from teratological studies. Moreover, for certain toxins a threshold effect has been observed in the dose-response pattern of the data. We propose to incorporate a random effect into a general threshold dose-response model to account for the variation in responses, while at the same time estimating the threshold effect. We fit this model to a well-known data set in the field of teratology. Simulation studies are performed to assess the validity of the random effects threshold model in these types of studies.
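As a sketch of the two ingredients being combined (not the authors' fitted model), the code below simulates per-litter counts from a threshold dose-response curve with a litter-level random effect; the threshold tau, slopes, and random-effect standard deviation are illustrative assumptions.

```python
# Illustrative threshold dose-response model with a litter random effect.
import numpy as np

rng = np.random.default_rng(0)

def p_response(dose, u, b0=-2.0, b1=1.5, tau=1.0):
    """Background rate below the threshold tau; logistic increase above it."""
    eta = b0 + b1 * np.maximum(dose - tau, 0.0) + u
    return 1 / (1 + np.exp(-eta))

doses = np.repeat([0.0, 0.5, 1.0, 2.0, 4.0], 10)   # 10 litters per dose group
u = rng.normal(0.0, 0.8, size=doses.size)          # litter random effects
n = rng.integers(6, 14, size=doses.size)           # litter sizes
y = rng.binomial(n, p_response(doses, u))          # affected fetuses per litter

for d in np.unique(doses):                          # flat below tau, rising above
    m = doses == d
    print(f"dose {d:.1f}: observed rate {y[m].sum() / n[m].sum():.3f}")
```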

3.
The Bayesian design approach accounts for uncertainty of the parameter values on which optimal design depends, but Bayesian designs themselves depend on the choice of a prior distribution for the parameter values. This article investigates Bayesian D-optimal designs for two-parameter logistic models, using numerical search. We show three things: (1) a prior with large variance leads to a design that remains highly efficient under other priors, (2) uniform and normal priors lead to equally efficient designs, and (3) designs with four or five equidistant equally weighted design points are highly efficient relative to the Bayesian D-optimal designs.
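A minimal sketch of the Bayesian D-criterion, E_theta[log det M(design, theta)], evaluated by Monte Carlo for the two-parameter logistic model at a five-point equidistant, equally weighted design; the normal prior and design region are illustrative assumptions, and the numerical search over designs is omitted.

```python
# Illustrative Monte Carlo evaluation of the Bayesian D-criterion.
import numpy as np

rng = np.random.default_rng(0)

def info_matrix(x, w, a, b):
    """Normalized Fisher information for logit p(x) = a + b*x."""
    p = 1 / (1 + np.exp(-(a + b * x)))
    h = w * p * (1 - p)
    return np.array([[np.sum(h), np.sum(h * x)],
                     [np.sum(h * x), np.sum(h * x ** 2)]])

def bayes_d_criterion(x, w, prior_draws):
    """Monte Carlo estimate of E[log det M] over prior draws of (a, b)."""
    return np.mean([np.linalg.slogdet(info_matrix(x, w, a, b))[1]
                    for a, b in prior_draws])

prior = np.column_stack([rng.normal(0.0, 1.0, 2000),    # intercept a
                         rng.normal(1.0, 0.3, 2000)])   # slope b

x5 = np.linspace(-3, 3, 5)          # five equidistant design points
w5 = np.full(5, 1 / 5)              # equal weights
print("Bayesian D-criterion:", bayes_d_criterion(x5, w5, prior))
```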

4.
The beta-binomial distribution, which is generated by a simple mixture model, has been widely applied in the social, physical, and health sciences. Problems of estimation, inference, and prediction have been addressed in the past, but not in a Bayesian framework. This article develops Bayesian procedures for the beta-binomial model and, using a suitable reparameterization, establishes a conjugate-type property for a beta family of priors. The transformed parameters have interesting interpretations, especially in marketing applications, and are likely to be more stable. More specifically, one of these parameters is the market share and the other is a measure of the heterogeneity of the customer population. Analytical results are developed for the posterior and prediction quantities, although the numerical evaluation is not trivial. Since the posterior moments are more easily calculated, we also propose the use of posterior approximation using the Pearson system. A particular case (when there are two trials), which occurs in taste testing, brand choice, media exposure, and some epidemiological applications, is analyzed in detail. Simulated and real data are used to demonstrate the feasibility of the calculations. The simulation results effectively demonstrate the superiority of Bayesian estimators, particularly in small samples, even with uniform (“non-informed”) priors. Naturally, “informed” priors can give even better results. The real data on television viewing behavior are used to illustrate the prediction results. In our analysis, several problems with the maximum likelihood estimators are encountered. The superior properties and performance of the Bayesian estimators and the excellent approximation results are strong indications that our results will be potentially of high value in small sample applications of the beta-binomial and in cases in which significant prior information exists.
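One convenient reparameterization of the kind the article exploits maps the Beta(a, b) mixing distribution to a mean parameter mu (interpretable as market share) and a heterogeneity parameter rho (the intra-cluster correlation); the (mu, rho) names and the two-trial example below are illustrative, not necessarily the authors' exact transformation.

```python
# Illustrative (mean, heterogeneity) parameterization of the beta-binomial.
import numpy as np
from scipy.stats import betabinom

def ab_from_mu_rho(mu, rho):
    """Map (mean, heterogeneity) to the standard (a, b) beta parameters."""
    s = (1 - rho) / rho              # a + b; rho -> 0 recovers Binomial(n, mu)
    return mu * s, (1 - mu) * s

mu, rho = 0.3, 0.2                   # market share and heterogeneity
a, b = ab_from_mu_rho(mu, rho)

# the two-trial case (n = 2) arising in taste testing and brand choice:
print("beta-binomial P(0), P(1), P(2):", betabinom.pmf([0, 1, 2], 2, a, b))
print("plain binomial P(0), P(1), P(2):",
      np.array([(1 - mu) ** 2, 2 * mu * (1 - mu), mu ** 2]))
# heterogeneity inflates the extreme cells relative to Binomial(2, mu)
```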

5.
We investigate robust M-estimators of location and over-dispersion for independent and identically distributed samples from Poisson and Negative Binomial (NB) distributions. We focus on asymptotic and small-sample efficiencies, outlier-induced biases, and biases caused by model mis-specification. This is important information for assessing the practical utility of the estimation method. Our results demonstrate that reasonably efficient estimation of location and over-dispersion parameters for count data is possible with sample sizes as small as n=25. These estimators can be sensitive to outliers, however, especially when the amount of over-dispersion is small. We also conclude that serious biases result when using robust Poisson M-estimation with NB data. The biases are less serious when using robust NB M-estimation with Poisson data.
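A minimal sketch of one robust location M-estimator for count data: solve sum psi((y_i - mu)/sqrt(mu)) = 0 with the Huber psi function, standardizing by the Poisson scale. The tuning constant and the estimating equation are illustrative assumptions, not the article's exact estimators.

```python
# Illustrative Huber-type M-estimator of location for count data.
import numpy as np
from scipy.optimize import brentq

def huber_psi(r, c=1.345):
    """Huber psi: identity in the middle, clipped in the tails."""
    return np.clip(r, -c, c)

def m_estimate_location(y, c=1.345):
    """Root of the estimating equation; Poisson-scale standardization."""
    f = lambda mu: np.sum(huber_psi((y - mu) / np.sqrt(mu), c))
    return brentq(f, 0.1, y.max() + 1)

rng = np.random.default_rng(0)
y = rng.poisson(5.0, size=25)          # n = 25, as in the article's small samples
y[0] = 40                              # a single gross outlier
print("sample mean:      ", y.mean())
print("Huber M-estimate: ", m_estimate_location(y))
```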

6.
Two-stage k-sample designs for the ordered alternative problem
In preclinical studies and clinical dose-ranging trials, the Jonckheere-Terpstra test is widely used in the assessment of dose-response relationships. Hewett and Spurrier (1979) presented a two-stage analog of the test in the context of large sample sizes. In this paper, we propose an exact test based on Simon's minimax and optimal design criteria originally used in one-arm phase II designs based on binary endpoints. The convergence rate of the joint distribution of the first and second stage test statistics to the limiting distribution is studied, and design parameters are provided for a variety of assumed alternatives. The behavior of the test is also examined in the presence of ties, and the proposed designs are illustrated through application in the planning of a hypercholesterolemia clinical trial. The minimax and optimal two-stage procedures are shown to be preferable as compared with the one-stage procedure because of the associated reduction in expected sample size for given error constraints.
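A minimal sketch of the operating characteristics underlying Simon-type two-stage criteria for a binary endpoint: stop for futility after stage 1 if X1 <= r1, and declare success at the end if X1 + X2 > r. The candidate design below is illustrative; minimax and optimal designs minimize the maximum and the H0-expected sample size, respectively, over all feasible (r1, n1, r, n).

```python
# Illustrative operating characteristics of a two-stage binary design.
import numpy as np
from scipy.stats import binom

def operating_chars(r1, n1, r, n, p):
    """Rejection probability and expected sample size at response rate p."""
    x1 = np.arange(r1 + 1, n1 + 1)                  # stage-1 continuation values
    reject = np.sum(binom.pmf(x1, n1, p) * binom.sf(r - x1, n - n1, p))
    pet = binom.cdf(r1, n1, p)                      # early-stop probability
    en = n1 + (1 - pet) * (n - n1)                  # expected sample size
    return reject, en

r1, n1, r, n = 3, 13, 12, 43          # a candidate design for p0 = 0.2, p1 = 0.4
alpha, en0 = operating_chars(r1, n1, r, n, p=0.2)
power, _ = operating_chars(r1, n1, r, n, p=0.4)
print(f"type I error {alpha:.3f}, power {power:.3f}, E[N | H0] {en0:.1f}")
```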

7.
To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluate 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures for imbalance and three measures for randomness. The maximum absolute imbalance and the correct guess (CG) probability are selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we find that the performances of the 14 randomization designs lie in a closed region with the upper boundary (worst case) given by Efron's biased coin design (BCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, Efron's BCD, and Wei's urn design.
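A minimal sketch of the kind of evaluation described: simulate Efron's BCD and the big stick design, scoring each by mean maximum absolute imbalance and by the correct-guess probability of a convergence-strategy guesser (who guesses the currently favored arm); the bias p = 2/3 and imbalance cap b = 3 are illustrative choices.

```python
# Illustrative comparison of two randomization designs by simulation.
import numpy as np

rng = np.random.default_rng(0)

# assignment probability for arm A given current imbalance d = #A - #B
bcd = lambda d, p=2/3: 0.5 if d == 0 else (p if d < 0 else 1 - p)   # Efron's BCD
bsd = lambda d, b=3: 0.5 if abs(d) < b else (0.0 if d >= b else 1.0)  # big stick

def simulate(rule, n=100, reps=2000):
    max_imb, cg = np.empty(reps), np.empty(reps)
    for r in range(reps):
        d, m, g = 0, 0, 0.0
        for _ in range(n):
            a = rule(d)                      # P(assign A) given imbalance d
            t = rng.random() < a             # actual assignment (True = A)
            g += 0.5 if a == 0.5 else float((a > 0.5) == t)  # guesser's success
            d += 1 if t else -1
            m = max(m, abs(d))
        max_imb[r], cg[r] = m, g / n
    return max_imb.mean(), cg.mean()

for name, rule in [("Efron's BCD", bcd), ("big stick", bsd)]:
    mi, cg = simulate(rule)
    print(f"{name}: mean max |imbalance| = {mi:.2f}, correct-guess prob = {cg:.3f}")
```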

8.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration-time curve (AUC), for a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or AUC in order to estimate the sample size. Since the variance is unknown, current 2-stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, the variance estimate from stage 1 data is unstable and may result in a stage 2 sample size that is too large or too small. This problem is magnified in bioequivalence tests with a serial sampling schedule, in which only one sample is collected from each individual, so that a correct variance assumption becomes even more difficult. To solve this problem, we propose 3-stage designs. Our designs increase sample sizes gradually over the stages, so that extremely large sample sizes do not occur. With one more stage of data, the power is increased. Moreover, the variance estimated using data from both stages 1 and 2 is more stable than that using data from stage 1 only in a 2-stage design. These features of the proposed designs are demonstrated by simulations. The significance levels for testing are adjusted to control the overall type I error at the same level for all the multistage designs.
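A minimal sketch of the instability the 3-stage proposal addresses: a stage-2 sample size computed from a small stage-1 variance estimate varies far more across trials than one computed from the pooled stages 1 and 2. The plain two-sample z-test formula and all constants are illustrative stand-ins for the bioequivalence-specific calculation.

```python
# Illustrative instability of sample-size re-estimation from a small stage.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, delta, n1, n2 = 1.0, 0.5, 12, 24
z2 = (norm.ppf(0.95) + norm.ppf(0.8)) ** 2   # one-sided alpha = .05, power = .8

def next_n(var_hat):
    """Per-group size for a two-sample z-test at the assumed effect delta."""
    return int(np.ceil(2 * z2 * var_hat / delta ** 2))

plan_s1, plan_s12 = [], []
for _ in range(5000):
    x = rng.normal(0.0, sigma, n1 + n2)
    plan_s1.append(next_n(np.var(x[:n1], ddof=1)))   # stage-1 variance only
    plan_s12.append(next_n(np.var(x, ddof=1)))       # stages 1 + 2 pooled
print("sd of planned n, stage-1 only:", round(float(np.std(plan_s1)), 1))
print("sd of planned n, pooled 1+2:  ", round(float(np.std(plan_s12)), 1))
```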

9.
Several researchers have proposed solutions to control the type I error rate in sequential designs. The use of Bayesian sequential designs is becoming more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for a binary outcome that uses an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. A sensitivity analysis assesses the effects of varying the parameters of the prior distribution and the maximum total sample size on the critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when the total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new stopping rule for futility results in greater power and can stop earlier with a smaller actual sample size, compared with the traditional stopping rule for futility when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
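As a concrete reference point, the Lan-DeMets spending functions below allocate the overall alpha across interim looks; the four-look schedule is an illustrative assumption, and the article's algorithms would convert the spent alpha into Bayesian critical values.

```python
# Illustrative alpha-spending functions evaluated at interim looks.
import numpy as np
from scipy.stats import norm

alpha = 0.05

def obf_spend(t, a=alpha):
    """Lan-DeMets O'Brien-Fleming-type spending; spends alpha by t = 1."""
    return 2 * (1 - norm.cdf(norm.ppf(1 - a / 2) / np.sqrt(t)))

def pocock_spend(t, a=alpha):
    """Lan-DeMets Pocock-type spending; spends alpha by t = 1."""
    return a * np.log(1 + (np.e - 1) * t)

t = np.array([0.25, 0.5, 0.75, 1.0])       # four equally spaced analyses
for name, f in [("O'Brien-Fleming", obf_spend), ("Pocock", pocock_spend)]:
    cum = f(t)
    print(name, "cumulative alpha spent:", np.round(cum, 4),
          "| incremental:", np.round(np.diff(cum, prepend=0), 4))
```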

10.
11.
In spatial statistics, the correct identification of a variogram model fitted to an empirical variogram depends on many factors. Here, simulation experiments show that fitting based on the variogram cloud is preferable to fitting based on Matheron's and Cressie–Hawkins's empirical variogram estimators. For correct model specification, a number of models should be fitted to the empirical variogram using a grid of cut-off values, and recommendations are given for the best choice. A design in which roughly half the maximum distance between points equals the practical range works well for correct variogram identification of any model, across varying nugget sizes and sample sizes.
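For reference, Matheron's classical estimator is gamma_hat(h) = (1/2|N(h)|) * sum over pairs in N(h) of (z_i - z_j)^2; the sketch below computes it on a toy field, with illustrative bin edges and a cut-off at roughly half the maximum distance.

```python
# Illustrative Matheron empirical variogram on a toy spatial sample.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(200, 2))              # random 2-D locations
z = np.sin(pts[:, 0]) + 0.3 * rng.normal(size=200)   # toy spatial variable

# all pairwise distances and squared increments, each pair counted once
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
sq = (z[:, None] - z[None, :]) ** 2
iu = np.triu_indices(len(z), k=1)
d, sq = d[iu], sq[iu]

edges = np.linspace(0, 5, 11)                        # cut-off ~ half max distance
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (d >= lo) & (d < hi)
    if m.any():
        print(f"lag [{lo:.1f}, {hi:.1f}): gamma = {sq[m].mean() / 2:.3f}")
```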

12.
In this article, we are interested in comparing growth curves for the Red Delicious apple at several locations to that of a reference site. Although such multiple comparisons are common for linear models, statistical techniques for nonlinear models are scarce. We derive a test statistic theoretically, considering the issues of sample size and design points. Under equal sample sizes and the same design points, our test statistic is based on the maximum of an equi-correlated multivariate chi-square distribution. Under unequal sample sizes and design points, we derive a general correlation structure and then use the multivariate normal distribution to numerically compute critical points for the maximum of the multivariate chi-square. We apply this technique to compare the growth of Red Delicious apples at six locations to a reference site in the state of Washington in 2009. Finally, we perform simulations to verify the performance of our proposed procedure for Type I error and marginal power. Our proposed method performs well in regard to both.
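A minimal sketch of computing a critical point for the maximum of equi-correlated chi-squares numerically, here with one degree of freedom per component built from equi-correlated normals; the number of sites k and the common correlation rho are illustrative assumptions.

```python
# Illustrative Monte Carlo critical point for max of equi-correlated chi-squares.
import numpy as np

rng = np.random.default_rng(0)
k, rho, n_mc = 6, 0.5, 200_000            # six sites compared to one reference

# equicorrelated normals: Z_i = sqrt(rho)*W + sqrt(1 - rho)*E_i
w = rng.normal(size=(n_mc, 1))
e = rng.normal(size=(n_mc, k))
z = np.sqrt(rho) * w + np.sqrt(1 - rho) * e

max_chi2 = (z ** 2).max(axis=1)           # max of k correlated chi-square(1)
print("95% critical point:", np.quantile(max_chi2, 0.95))
```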

13.
Confirmatory bioassay experiments take place in late stages of the drug discovery process when a small number of compounds have to be compared with respect to their properties. As the cost of the observations may differ considerably, the design problem is well specified by the cost of compound used rather than by the number of observations. We show that cost-efficient designs can be constructed using useful properties of the minimum support designs. These designs are particularly suited for studies where the parameters of the model to be estimated are known with high accuracy prior to the experiment, although they prove to be robust against typical inaccuracies of these values. When the parameters of the model can only be specified with ranges of values or by a probability distribution, we use a Bayesian criterion of optimality to construct the required designs. Typically, the number of their support points depends on the prior knowledge for the model parameters. In all cases we recommend identifying a set of designs with good statistical properties but different potential costs to choose from.

14.
In Computer Experiments (CE), a careful selection of the design points is essential for predicting the system response at untried points, based on the values observed at tried points. In physical experiments, the protocol is based on Design of Experiments, a methodology whose basic principles are questioned in CE. When the responses of a CE are modeled as jointly Gaussian random variables with their covariance depending on the distance between points, the use of so-called space-filling designs (random designs, stratified designs and Latin Hypercube designs) is a common choice, because it is expected that the nearer the untried point is to the design points, the better is the prediction. In this paper we focus on the class of Latin Hypercube (LH) designs. The behavior of various LH designs is examined according to the Gaussian assumption with exponential correlation, in order to minimize the total prediction error at the points of a regular lattice. In such a special case, the problem is reduced to an algebraic statistical model, which is solved using both symbolic algebraic software and statistical software. We provide closed-form computation of the variance of the Gaussian linear predictor as a function of the design, in order to make a comparison between LH designs. In principle, the method applies to any number of factors and any number of levels, and also to classes of designs other than LHs. In our current implementation, the applicability is limited by the high computational complexity of the algorithms involved.
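A minimal sketch of the two objects involved: a random Latin Hypercube design and the Gaussian prediction variance at an untried point under an exponential correlation R(s, t) = exp(-theta * ||s - t||_1); the dimension, design size, theta, and test point are illustrative assumptions, and the article's symbolic closed-form computation is replaced by plain numerics.

```python
# Illustrative LH design and simple-kriging variance at an untried point.
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d):
    """One midpoint-LH design: a random permutation of n levels per column."""
    return np.column_stack([(rng.permutation(n) + 0.5) / n for _ in range(d)])

def kriging_variance(X, x0, theta=2.0):
    """Simple-kriging variance at x0 for a unit-variance, noise-free GP."""
    R = np.exp(-theta * np.abs(X[:, None, :] - X[None, :, :]).sum(-1))
    r = np.exp(-theta * np.abs(X - x0).sum(-1))
    return 1.0 - r @ np.linalg.solve(R, r)

X = latin_hypercube(8, 2)              # 8-point LH design in two factors
x0 = np.array([0.5, 0.5])              # an untried lattice point
print("prediction variance at x0:", kriging_variance(X, x0))
```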

15.
Historical control trials compare an experimental treatment with a previously conducted control treatment. By assigning all recruited participants to the experimental arm, historical control trials can better identify promising treatments in early phase trials than randomized control trials can. Existing designs of historical control trials with survival endpoints are based on the asymptotic normal distribution. However, it remains unclear whether the asymptotic distribution of the test statistic is close enough to the true distribution given the relatively small sample sizes of early phase trials. In this article, we address this question by introducing an exact design approach for exponentially distributed survival endpoints, and we compare it with an asymptotic design in both real and simulated examples. Simulation results show that the asymptotic test can bias the sample size estimate. We conclude that the proposed exact design should be used in the design of historical control trials.
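A minimal sketch of why an exact design is available for exponential endpoints: the total time on test T = sum X_i for n uncensored exponential(lambda) observations is Gamma(n, 1/lambda), so critical values and power follow from gamma quantiles rather than a normal approximation. The rates and sample size are illustrative, and censoring (which the article's design would handle) is ignored here.

```python
# Illustrative exact test for an exponential survival rate via the gamma law.
from scipy.stats import gamma

n, lam0, lam1 = 20, 1.0, 0.5     # historical-control rate lam0, alternative lam1

# reject H0: lambda = lam0 in favor of longer survival when T is large
crit = gamma.ppf(0.95, a=n, scale=1 / lam0)   # exact 5% critical value for T
power = gamma.sf(crit, a=n, scale=1 / lam1)   # exact power at lambda = lam1
print(f"exact critical value {crit:.2f}, exact power {power:.3f}")
```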

16.
The number of people to select within selected households has significant consequences for the conduct and output of household surveys. The operational and data quality implications of this choice are carefully considered in many surveys, but the effect on statistical efficiency is not well understood. The usual approach is to select all people in each selected household, where operational and data quality concerns make this feasible. If not, one person is usually selected from each selected household. We find that this strategy is not always justified, and we develop intermediate designs between these two extremes. Current practices were developed when household survey field procedures needed to be simple and robust; however, more complex designs are now feasible owing to the increasing use of computer-assisted interviewing. We develop more flexible designs by optimizing survey cost, based on a simple cost model, subject to a required variance for an estimator of population total. The innovation lies in the fact that household sample sizes are small integers, which creates challenges in both design and estimation. The new methods are evaluated empirically by using census and health survey data, showing considerable improvement over existing methods in some cases.
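A minimal sketch of the optimization described: with intra-household correlation rho, the variance of an estimated mean carries the design effect 1 + (k - 1) * rho, so for each small integer k (people per household) we can compute the households needed to hit a variance target and the resulting cost. All constants are illustrative assumptions, not the article's cost model.

```python
# Illustrative cost-variance trade-off over integer household sample sizes k.
import math

rho, s2, v_target = 0.3, 1.0, 0.002      # correlation, unit variance, requirement
c_h, c_p = 10.0, 2.0                     # per-household and per-person costs

best = None
for k in range(1, 9):                    # small integer household sample sizes
    deff = 1 + (k - 1) * rho             # design effect of clustering
    m = math.ceil(s2 * deff / (v_target * k))   # households needed for target
    cost = m * (c_h + c_p * k)
    best = min(best, (cost, k, m)) if best else (cost, k, m)
    print(f"k={k}: households={m}, cost={cost:.0f}")
print("cheapest (cost, k, households):", best)
```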

17.
Instead of using traditional separate phase I and II trials, we propose in this article a parallel three-stage phase I/II design that incorporates a dose expansion approach to flexibly evaluate the safety and efficacy of dose levels and to select the optimal dose. In the proposed design, both the toxicity and efficacy responses are binary endpoints. A 3+3-based procedure is used for the initial period of dose escalation at stage 1; a dose at this level can be expanded to stage 2 for exploratory efficacy studies of phase IIa while, simultaneously, safety testing advances to a higher dose level. A beta-binomial model is used to model the efficacy responses. Two placebo-controlled randomized interim monitoring analyses at stage 2 select the promising doses to be recommended to stage 3 for further efficacy studies of phase IIb. An adaptive randomization approach assigns more patients to doses with higher efficacy levels at stage 3. We examine the properties of the proposed design through extensive simulation studies using the R programming language, and we also compare the new design with the conventional design and a competing adaptive Bayesian design. The simulation results show that our design efficiently assigns more patients to doses with higher efficacy levels and is superior to the two competing designs in terms of total sample size reduction.
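For reference, the stage-1 escalation rule is the classic 3+3: escalate on 0/3 toxicities, expand to six on 1/3, and declare the previous dose the MTD on two or more; the true toxicity rates below are illustrative, and the expansion and randomization stages of the proposed design are omitted.

```python
# Illustrative simulation of the classic 3+3 dose-escalation rule.
import numpy as np

rng = np.random.default_rng(0)
tox_prob = [0.05, 0.10, 0.20, 0.35, 0.55]       # per-dose toxicity rates

def three_plus_three(p):
    level = 0
    while level < len(p):
        t = rng.binomial(3, p[level])           # first cohort of 3
        if t == 0:
            level += 1; continue                # 0/3: escalate
        if t == 1:
            t += rng.binomial(3, p[level])      # expand to 6
            if t == 1:
                level += 1; continue            # 1/6: escalate
        return level - 1                        # >= 2 toxicities: previous dose
    return len(p) - 1                           # escalated past the top dose

mtd = np.array([three_plus_three(tox_prob) for _ in range(5000)])
# index 0 = no safe dose; indices 1..5 = dose levels 1..5
print("MTD selection frequencies:", np.bincount(mtd + 1, minlength=6) / 5000)
```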

18.
Many two-phase sampling designs have been applied in practice to obtain efficient estimates of regression parameters while minimizing the cost of data collection. This research investigates two-phase sampling designs for so-called expensive variable problems, and compares them with one-phase designs. Closed form expressions for the asymptotic relative efficiency of maximum likelihood estimators from the two designs are derived for parametric normal models, providing insight into the available information for regression coefficients under the two designs. We further discuss when we should apply the two-phase design and how to choose the sample sizes for two-phase samples. Our numerical study indicates that the results can be applied to more general settings.

19.
The likelihood equations based on a progressively Type II censored sample from a Type I generalized logistic distribution do not provide explicit solutions for the location and scale parameters. We present a simple method of deriving explicit estimators by approximating the likelihood equations appropriately. We examine numerically the bias and variance of these estimators and show that these estimators are as efficient as the maximum likelihood estimators (MLEs). The probability coverages of the pivotal quantities (for location and scale parameters) based on asymptotic normality are shown to be unsatisfactory, especially when the effective sample size is small. Therefore we suggest using unconditional simulated percentage points of these pivotal quantities for the construction of confidence intervals. A wide range of sample sizes and progressive censoring schemes have been considered in this study. Finally, we present a numerical example to illustrate the methods of inference developed here.

20.
Crossover designs have some advantages over standard clinical trial designs, and they are often used in trials evaluating the efficacy of treatments for infertility. However, clinical trials of infertility treatments violate a fundamental condition of crossover designs, because women who become pregnant in the first treatment period are not treated in the second period. Previous research dealt with this problem through new designs, such as re-randomization designs, and analysis methods including the logistic mixture model and the beta-binomial mixture model. Although the performance of these designs and methods has been evaluated in large-scale clinical trials with sample sizes of more than 1000 per group, the actual sample sizes of infertility treatment trials are usually around 100 per group. The most appropriate design and analysis for these moderate-scale clinical trials are currently unclear. In this study, we conducted simulation studies to determine the appropriate design and analysis method for moderate-scale clinical trials with irreversible endpoints by evaluating the statistical power and the bias in the treatment effect estimates. The Mantel–Haenszel method had similar power and bias to the logistic mixture model. The crossover designs had the highest power and the smallest bias. We recommend using a combination of the crossover design and the Mantel–Haenszel method for two-period, two-treatment clinical trials with irreversible endpoints.
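For reference, the Mantel–Haenszel common odds ratio across K strata of 2x2 tables is sum_k(a_k d_k / n_k) / sum_k(b_k c_k / n_k); the sketch below computes it on toy stratified tables, which are illustrative, not trial data.

```python
# Illustrative Mantel-Haenszel common odds-ratio estimate across strata.
import numpy as np

# each row: (a, b, c, d) = (trt success, trt failure, ctrl success, ctrl failure)
tables = np.array([
    [15, 35, 8, 42],        # stratum 1 (e.g., period or center)
    [12, 38, 7, 43],        # stratum 2
])
a, b, c, d = tables.T
n = tables.sum(axis=1)      # per-stratum totals

or_mh = np.sum(a * d / n) / np.sum(b * c / n)
print(f"Mantel-Haenszel odds ratio: {or_mh:.2f}")
```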
