Similar articles
 20 similar articles found (search time: 31 ms)
1.
A bioequivalence test compares bioavailability parameters, such as the maximum observed concentration (Cmax) or the area under the concentration-time curve (AUC), between a test drug and a reference drug. Planning a bioequivalence test requires an assumption about the variance of Cmax or AUC in order to estimate the sample size. Since the variance is unknown, current 2-stage designs use the variance estimated from stage 1 data to determine the sample size for stage 2. However, this stage 1 variance estimate is unstable and may yield a stage 2 sample size that is far too large or too small. The problem is magnified in bioequivalence tests with a serial sampling schedule, in which only one sample is collected from each individual, making a correct variance assumption even harder. To address this, we propose 3-stage designs. Our designs increase the sample size gradually over stages, so that extremely large sample sizes do not occur. With one more stage of data, power is increased. Moreover, the variance estimated from stages 1 and 2 combined is more stable than the estimate from stage 1 alone in a 2-stage design. These features of the proposed designs are demonstrated by simulations. Significance levels are adjusted to control the overall type I error at the same level across all the multistage designs.
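As a rough illustration of why the variance assumption drives the sample size, here is a sketch of a standard normal-approximation sample-size formula for average bioequivalence under TOST; the 0.95 GMR, the 1.25 limit, and the CV-to-variance conversion are conventional defaults for illustration, not values from the paper:

```python
from statistics import NormalDist
import math

def abe_sample_size(cv, alpha=0.05, beta=0.2, gmr=0.95, limit=1.25):
    """Approximate n per sequence for average bioequivalence (TOST),
    via a normal approximation; cv is the within-subject CV on the
    original scale."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - beta / 2)
    s2 = math.log(1 + cv ** 2)                    # log-scale variance from CV
    delta = math.log(limit) - abs(math.log(gmr))  # margin minus assumed GMR shift
    return math.ceil(2 * s2 * (z_a + z_b) ** 2 / delta ** 2)

# The point of multistage designs: re-estimate cv from accumulating data
# instead of trusting a single unstable stage-1 estimate.
print(abe_sample_size(0.30))  # n for an assumed CV of 30%
```

Because the CV enters quadratically, a modest error in the interim variance estimate moves the computed n substantially, which is the instability the 3-stage designs are built to dampen.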

2.
In some applications it is cost efficient to sample data in two or more stages. In the first stage a simple random sample is drawn and then stratified according to some easily measured attribute. In each subsequent stage a random subset of previously selected units is sampled for more detailed and costly observation, with a unit's sampling probability determined by its attributes as observed in the previous stages. This paper describes multistage sampling designs and estimating equations based on the resulting data. Maximum likelihood estimates (MLEs) and their asymptotic variances are given for designs using parametric models. Horvitz–Thompson estimates are introduced as alternatives to MLEs, their asymptotic distributions are derived and their strengths and weaknesses are evaluated. The designs and the estimates are illustrated with data on corn production.
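The Horvitz–Thompson estimator mentioned above has a simple form: weight each sampled value by the inverse of its inclusion probability. A minimal sketch with invented toy data:

```python
def horvitz_thompson_total(y, pi):
    """HT estimator of a population total: sum of y_i / pi_i over sampled
    units, valid for any design with known first-order inclusion
    probabilities pi_i."""
    return sum(yi / pii for yi, pii in zip(y, pi))

# Toy sample with unequal inclusion probabilities (invented numbers):
y = [10.0, 4.0, 7.0]
pi = [0.5, 0.2, 0.35]
print(horvitz_thompson_total(y, pi))  # 20 + 20 + 20 = 60.0
```

In a multistage design, each pi_i is the product of the selection probabilities across stages, which is what makes the estimator design-unbiased there as well.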

3.
The use of complex sampling designs in population-based case–control studies is becoming more common, particularly for sampling the control population. This is prompted by all the usual cost and logistical benefits that are conferred by multistage sampling. Complex sampling has often been ignored in analysis but, with the advent of packages like SUDAAN, survey-weighted analyses that take account of the sample design can be carried out routinely. This paper explores this approach and more efficient alternatives, which can also be implemented by using readily available software.

4.
In large epidemiological studies, budgetary or logistical constraints typically preclude investigators from measuring all exposures, covariates and outcomes of interest on all study subjects. We develop a flexible theoretical framework that incorporates a number of familiar designs, such as case-control and cohort studies, as well as multistage sampling designs. Our framework also allows for designed missingness and includes the option of outcome-dependent designs. The formulation is based on maximum likelihood and generalizes well-known results for inference with missing data to the multistage setting. A variety of techniques are applied to streamline the computation of the Hessian matrix for these designs, facilitating the development of an efficient software tool that implements a wide variety of designs.

5.
Consider the problem of estimating a dose with a certain response rate. Many multistage dose-finding designs for this problem were originally developed for oncology studies where the mean dose–response is strictly increasing in dose. In non-oncology phase II dose-finding studies, the dose–response curve often plateaus in the range of interest, and there are several doses with the mean response equal to the target. In this case, it is usually of interest to find the lowest of these doses because higher doses might have higher adverse event rates. It is often desirable to compare the response rate at the estimated target dose with a placebo and/or active control. We investigate which of the several known dose-finding methods developed for oncology phase I trials is the most suitable when the dose–response curve plateaus. Some of the designs tend to spread the allocation among the doses on the plateau. Others, such as the continual reassessment method and the t-statistic design, concentrate allocation at one of the doses, with the t-statistic design selecting the lowest dose on the plateau more frequently. Copyright © 2013 John Wiley & Sons, Ltd.

6.
Many two-phase sampling designs have been applied in practice to obtain efficient estimates of regression parameters while minimizing the cost of data collection. This research investigates two-phase sampling designs for so-called expensive variable problems, and compares them with one-phase designs. Closed form expressions for the asymptotic relative efficiency of maximum likelihood estimators from the two designs are derived for parametric normal models, providing insight into the available information for regression coefficients under the two designs. We further discuss when we should apply the two-phase design and how to choose the sample sizes for two-phase samples. Our numerical study indicates that the results can be applied to more general settings.

7.
Stratification provides a powerful tool for improving efficiency and, being suitable for various sampling situations, is commonly used in practice. Motivated by the utility of the stratified sampling scheme, we study the behavior of the estimator of the proportion of a sensitive attribute when dealing with non-identical Bernoulli trials in survey research. The objective is achieved by considering a general randomized response model. Relative efficiency comparisons are presented along with a cost analysis under different cost functions. Stratified random sampling is observed to yield more precise estimators.
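A concrete instance of a randomized response estimator is Warner's classic model, used here as a stand-in for the paper's more general model; the stratum weights and 'yes' proportions below are invented:

```python
def warner_pi_hat(yes_prop, p):
    """Warner randomized response: respondents answer the sensitive question
    truthfully with probability p and its complement otherwise (p != 0.5);
    invert the observed 'yes' proportion to estimate the true proportion."""
    return (yes_prop - (1 - p)) / (2 * p - 1)

def stratified_pi_hat(strata, p):
    """Combine per-stratum Warner estimates with stratum weights W_h
    that sum to 1."""
    return sum(w * warner_pi_hat(lam, p) for w, lam in strata)

# Two strata: (weight, observed 'yes' proportion) -- invented numbers.
strata = [(0.6, 0.34), (0.4, 0.46)]
print(stratified_pi_hat(strata, p=0.7))  # 0.6*0.1 + 0.4*0.4 ≈ 0.22
```

Stratifying lets each stratum contribute its own response behavior while the randomizing device keeps individual answers deniable.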

8.
In this paper we consider a family of sampling designs for which increasing first-order inclusion probabilities imply, in a specific sense, increasing conditional inclusion probabilities. It is proved that the complementary Midzuno, the conditional Poisson, and the Sampford designs belong to this family. It is shown that designs of the family are more efficient than a comparable with-replacement design. Furthermore, the efficiency gain is explicitly given for these designs.

9.
When estimating in a practical situation, asymmetric loss functions are often preferred over squared error loss, being more appropriate in many estimation problems. We consider the problem of fixed-precision point estimation of a linear parametric function of the regression coefficients in the multiple linear regression model under asymmetric loss functions. Due to the presence of nuisance parameters, the sample size for the estimation problem is not known beforehand, so we resort to adaptive multistage sampling methodologies. We discuss several multistage sampling techniques and compare their performance using simulation runs. The codes for our proposed models are implemented in MATLAB 7.0.1 run on a Pentium IV machine. Finally, we highlight the significance of such asymmetric loss functions with a few practical examples.
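To give a flavor of why adaptive multistage sampling is needed when a nuisance variance blocks a fixed-precision calculation, here is a Stein-style two-stage rule under a plain normal approximation; this is not the paper's asymmetric-loss procedure, and the pilot data are invented:

```python
from statistics import NormalDist, variance
import math

def second_stage_size(stage1_data, half_width, alpha=0.05):
    """Total n needed for a fixed-width confidence interval, re-estimated
    from the stage-1 sample variance (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    s2 = variance(stage1_data)          # nuisance parameter estimated at stage 1
    n = math.ceil(z * z * s2 / half_width ** 2)
    return max(n, len(stage1_data))     # never fewer than already observed

stage1 = [10.1, 9.8, 10.4, 10.0, 9.7, 10.2]   # invented pilot sample
print(second_stage_size(stage1, half_width=0.1))
```

The final sample size is a random quantity driven by the pilot variance, which is exactly the situation the multistage methodologies in the paper are designed to handle.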

10.
An outcome-dependent sampling (ODS) design is a retrospective sampling scheme in which the primary exposure variables are observed with a probability that depends on the observed value of the outcome variable. When the outcome of interest is a failure time, the observed data are often censored. By letting the selection of supplemental samples depend on whether the event of interest occurs, and by oversampling subjects from the most informative regions, ODS designs for time-to-event data can reduce study cost and improve efficiency. We review recent progress and advances in research on ODS designs with failure time data, including related designs such as the case–cohort design, generalized case–cohort design, stratified case–cohort design, general failure-time ODS design, length-biased sampling design and interval sampling design.
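A minimal sketch of outcome-dependent Bernoulli sampling: the selection probability is a function of the outcome, and its inverse is retained as a design weight for later analysis. The tail-oversampling rule and all numbers are invented for illustration:

```python
import random

def ods_sample(subjects, p_of_outcome, rng=random):
    """Outcome-dependent Bernoulli sampling: subject (sid, y) is selected
    with probability p_of_outcome(y); 1/p is kept as its design weight."""
    out = []
    for sid, y in subjects:
        p = p_of_outcome(y)
        if rng.random() < p:
            out.append((sid, y, 1.0 / p))   # (id, outcome, weight)
    return out

rng = random.Random(7)
subjects = [(i, rng.gauss(0, 1)) for i in range(1000)]
tails = lambda y: 0.9 if abs(y) > 1.5 else 0.1   # oversample informative tails
sample = ods_sample(subjects, tails, rng)
print(len(sample))
```

Subjects in the extremes of the outcome carry the most information about the exposure effect, so sampling them heavily buys efficiency at a fixed measurement budget.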

11.
In this article we discuss multistage group screening in which group-factors contain differing numbers of factors. We describe a procedure for grouping the factors in the absence of concrete prior information so that the relative testing cost is minimal. It is shown that, under quite general conditions, these designs require fewer runs than the equivalent designs in which the group-factors contain the same number of factors.

12.
The low income proportion is an important index for comparing poverty across countries, and the stability of a society depends heavily on it. Accurate and reliable estimation of this index plays an important role in shaping governments' economic policies. In this paper, the authors study empirical likelihood-based inference for the low income proportion under simple random sampling and stratified random sampling designs. It is shown that the limiting distributions of the empirical likelihood ratios for the low income proportion are scaled chi-square distributions. The authors propose various empirical likelihood-based confidence intervals for the low income proportion. Extensive simulation studies evaluate the relative performance of the normal approximation-based interval, bootstrap-based intervals, and the empirical likelihood-based intervals. The proposed methods are also applied to a real economic survey income dataset. The Canadian Journal of Statistics 39: 1–16; 2011 © 2011 Statistical Society of Canada
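One common convention defines the low income proportion as the fraction of the population below half the median income; here is a sketch of a survey-weighted version under that assumption (the cutoff fraction and the data are illustrative, not from the paper):

```python
def low_income_proportion(incomes, weights, frac=0.5):
    """Weighted share of units with income below frac * weighted median."""
    pairs = sorted(zip(incomes, weights))
    total = sum(weights)
    cum, median = 0.0, pairs[-1][0]
    for x, w in pairs:                  # lower weighted median
        cum += w
        if cum >= total / 2:
            median = x
            break
    cutoff = frac * median
    return sum(w for x, w in pairs if x < cutoff) / total

incomes = [5, 12, 20, 25, 30, 40, 55, 80]
weights = [1] * 8                       # equal weights, i.e. SRS
print(low_income_proportion(incomes, weights))  # median 25, cutoff 12.5 -> 0.25
```

Because the cutoff itself is estimated from the same data, interval estimation for this index is nontrivial, which motivates the empirical likelihood machinery of the paper.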

13.
Modeling survey data often requires knowledge of the design and weighting variables. With public-use survey data, some of these variables may not be available for confidentiality reasons. The proposed approach can be used in this situation, as long as calibrated weights and variables identifying the strata and primary sampling units are available. It gives consistent point estimation and a pivotal statistic for testing and confidence intervals. The approach does not rely on with-replacement sampling, single-stage sampling, negligible sampling fractions, or noninformative sampling, and adjustments based on design effects, eigenvalues, joint-inclusion probabilities or the bootstrap are not needed. The inclusion probabilities and auxiliary variables do not have to be known. Multistage designs with unequal selection of primary sampling units are considered, and nonresponse is easily accommodated if the calibrated weights include a reweighting adjustment for nonresponse. We use an unconditional approach, in which both the variables and the sample are random, and the design may be informative.

14.
Bayesian hierarchical formulations are utilized by the U.S. Bureau of Labor Statistics (BLS) with respondent-level data for missing item imputation because these formulations are readily parameterized to capture correlation structures. BLS collects survey data under informative sampling designs that assign inclusion probabilities correlated with the response, so sampling-weighted pseudo posterior distributions are estimated for asymptotically unbiased inference about population model parameters. Computation is expensive and does not support BLS production schedules. We propose a new method that scales the computation by dividing the data into smaller subsets, estimating a sampling-weighted pseudo posterior distribution in parallel for every subset, and combining the pseudo posterior parameter samples from all the subsets through their mean in the Wasserstein space of order 2. We construct conditions on a class of sampling designs under which posterior consistency of the proposed method is achieved. We demonstrate on both synthetic data and in an application to the Current Employment Statistics survey that our method produces results of similar accuracy to the usual approach while offering substantially faster computation.
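In one dimension, the order-2 Wasserstein barycenter of empirical measures with equal numbers of draws reduces to averaging order statistics, which gives the flavor of the combination step (a sketch with invented draws; the paper's setting is more general):

```python
def wasserstein2_barycenter(subset_draws):
    """1-D order-2 Wasserstein barycenter of empirical measures with equal
    sample sizes: sort each subset's draws, then average position by
    position across subsets."""
    sorted_draws = [sorted(d) for d in subset_draws]
    k = len(sorted_draws)
    return [sum(col) / k for col in zip(*sorted_draws)]

# Pseudo-posterior draws for one parameter from three data subsets (invented):
draws = [[1.0, 2.0, 3.0], [1.2, 2.2, 3.2], [0.8, 1.8, 2.8]]
print(wasserstein2_barycenter(draws))  # ≈ [1.0, 2.0, 3.0]
```

Each subset can be fitted independently on its own worker, and only the final draws need to be gathered, which is where the speedup comes from.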

15.
In outcome-dependent sampling, the continuous or binary outcome variable in a regression model is available in advance to guide selection of a sample on which explanatory variables are then measured. Selection probabilities may either be a smooth function of the outcome variable or be based on a stratification of the outcome. In many cases, only data from the final sample is accessible to the analyst. A maximum likelihood approach for this data configuration is developed here for the first time. The likelihood for fully general outcome-dependent designs is stated, then the special case of Poisson sampling is examined in more detail. The maximum likelihood estimator differs from the well-known maximum sample likelihood estimator, and an information bound result shows that the former is asymptotically more efficient. A simulation study suggests that the efficiency difference is generally small. Maximum sample likelihood estimation is therefore recommended in practice when only sample data is available. Some new smooth sample designs show considerable promise.

16.
We propose a randomized minima–maxima nomination (RMMN) sampling design for use in finite populations. We derive the first- and second-order inclusion probabilities for both with- and without-replacement variations of the design. The inclusion probabilities for the without-replacement variation are derived using a non-homogeneous Markov process. The design is simple to implement and results in simple, easy-to-calculate estimators and variances. It generalizes maxima nomination sampling for use in finite populations and includes some other sampling designs as special cases. We provide some optimality results and show that, in the context of finite population sampling, maxima nomination sampling is not generally the optimum design to follow. We also show, through numerical examples and a case study, that the proposed design can result in significant improvements in efficiency compared to simple random sampling without replacement for a wide choice of population types. Finally, we describe a bootstrap method for choosing values of the design parameters.

17.
When measuring units is expensive or time consuming, while ranking them is relatively easy and inexpensive, ranked set sampling (RSS) is known to be preferable to simple random sampling (SRS). Many authors have suggested extensions of RSS. Al-Saleh and Al-Kadiri [Double ranked set sampling, Statist. Probab. Lett. 48 (2000), pp. 205–212] introduced double ranked set sampling (DRSS), which was extended by Al-Saleh and Al-Omari [Multistage ranked set sampling, J. Statist. Plann. Inference 102 (2002), pp. 273–286] to multistage ranked set sampling (MSRSS). The entropy of a random variable (r.v.) is a measure of its uncertainty: the amount of information required on average to determine the value of a (discrete) r.v. In this work, we discuss entropy estimation under RSS and the aforementioned extensions and compare the results with those under SRS in terms of bias and root mean square error (RMSE). Motivated by the observed efficiency, we further investigate an entropy-based goodness-of-fit test for the inverse Gaussian distribution using RSS. Critical values for some sample sizes, determined by Monte Carlo simulation, are presented for each design. A Monte Carlo power analysis is performed under various alternative hypotheses to compare the proposed testing procedure with existing methods. The results indicate that tests based on RSS and its extensions are superior alternatives to the entropy test based on SRS.
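A sketch of one cycle of basic ranked set sampling, the building block that DRSS and MSRSS iterate; ranking here uses the observed value itself as a stand-in for cheap judgment ranking:

```python
import random

def ranked_set_sample(population, m, rng=random):
    """One RSS cycle with set size m: draw m sets of m units, rank each set,
    and measure only the i-th ranked unit of the i-th set, for m
    measurements in total."""
    measured = []
    for i in range(m):
        judgment_set = rng.sample(population, m)   # ranking is cheap...
        measured.append(sorted(judgment_set)[i])   # ...measurement is not
    return measured

rng = random.Random(1)
pop = list(range(100))
print(ranked_set_sample(pop, 3, rng))
```

DRSS feeds the output of one such cycle back in as the population for another round of ranking, and MSRSS repeats this over r stages.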

18.
A new class of Bayesian estimators for a proportion in multistage binomial designs is considered. Priors belong to the beta-J distribution family, which is derived from the Fisher information associated with the design. The transposition of the beta parameters of the Haldane and the uniform priors in fixed binomial experiments into the beta-J distribution yields bias-corrected versions of these priors in multistage designs. We show that the estimator of the posterior mean based on the corrected Haldane prior and the estimator of the posterior mode based on the corrected uniform prior have good frequentist properties. An easy-to-use approximation of the estimator of the posterior mode is provided. The new Bayesian estimators are compared to Whitehead's and the uniformly minimum variance estimators through several multistage designs. Last, the bias of the estimator of the posterior mode is derived for a particular case.
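The fixed-sample analogue of these estimators is the posterior mean of a binomial proportion under a Beta prior; a sketch (the Beta(0.5, 0.5) default and the counts are illustrative, and the paper's beta-J priors adjust the prior to the multistage design):

```python
def beta_posterior_mean(successes, trials, a=0.5, b=0.5):
    """Posterior mean of a binomial proportion under a Beta(a, b) prior:
    (a + x) / (a + b + n). Fixed-sample case only."""
    return (a + successes) / (a + b + trials)

# Jeffreys-type Beta(0.5, 0.5) prior, 7 successes in 20 trials:
print(beta_posterior_mean(7, 20))  # 7.5 / 21
```

In a multistage design the stopping rule changes the Fisher information, so naively reusing the fixed-sample a and b induces bias, which is what the beta-J correction addresses.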

19.
The case-crossover design has been used by many researchers to study the transient effect of an exposure on the risk of a rare outcome. In a case-crossover design, only cases are sampled and each case acts as his or her own control: the time of failure acts as the case, and non-failure times act as the controls. Case-crossover designs have frequently been used to study the effect of environmental exposures on rare diseases or mortality. Time trends and seasonal confounding may be present in environmental studies and thus need to be controlled for by the sampling design. Several sampling methods are available for this purpose. In time-stratified sampling, disjoint strata of equal size are formed and the control times within the case's stratum are used for comparison. The random semi-symmetric sampling design randomly selects a control time from two possible control times, and the fixed semi-symmetric sampling design is a modified version that removes the random selection. Simulations show that the fixed semi-symmetric design improves the variance of the random semi-symmetric estimator by at least 35% for the exposures we studied. We derive expressions for the asymptotic variance of risk estimators for these designs and show that, while the designs are not theoretically equivalent, in many realistic situations the random semi-symmetric design has efficiency similar to a time-stratified design of size two, and the fixed semi-symmetric design has efficiency similar to a time-stratified design of size three.
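A sketch of time-stratified control selection: the controls are the other days in the case's stratum that share the case's weekday, which removes day-of-week and within-stratum trend confounding by design. The fixed 28-day blocks and the study start date are assumptions for illustration; calendar-month strata are also common:

```python
import datetime

STUDY_START = datetime.date(2020, 1, 1)   # assumed start of the study period

def time_stratified_controls(case_date, stratum_days=28):
    """Controls = every other day in the case's stratum sharing the case's
    weekday; strata are consecutive 28-day blocks from STUDY_START."""
    block = (case_date - STUDY_START).days // stratum_days
    first = STUDY_START + datetime.timedelta(days=block * stratum_days)
    controls = []
    for d in range(stratum_days):
        day = first + datetime.timedelta(days=d)
        if day != case_date and day.weekday() == case_date.weekday():
            controls.append(day)
    return controls

print(time_stratified_controls(datetime.date(2020, 1, 15)))
```

With 28-day strata each case gets exactly three same-weekday controls, matching the "time-stratified design of size three" comparison in the abstract.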

20.
Adaptive cluster sampling (ACS) is considered the most suitable sampling design for estimating rare, hidden, clustered and hard-to-reach population units. The main characteristic of this design is that it may select more meaningful samples and provide more efficient estimates for the field investigator compared with conventional sampling designs. In this paper, we propose a generalized estimator with a single auxiliary variable for estimating the variance of a rare, hidden and highly clustered population under an ACS design. Expressions for the approximate bias and mean square error are derived, and efficiency comparisons are made with other existing estimators. A numerical study is carried out on a real population of aquatic birds together with an artificial population generated by a Poisson cluster process. The results show that the proposed generalized variance estimator provides considerably better results than the competing estimators.
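A minimal sketch of the adaptive step in ACS: whenever a selected plot satisfies the condition (here, a count above a threshold), its neighbours are added to the sample, so whole networks of the rare cluster are captured. The grid and threshold are invented for illustration:

```python
def adaptive_cluster_sample(grid, seeds, threshold=0):
    """ACS sketch: starting from initially selected cells ('seeds'), any cell
    whose count exceeds the threshold pulls its four rook neighbours into
    the sample, repeating until the networks stop growing."""
    rows, cols = len(grid), len(grid[0])
    sampled, frontier = set(), list(seeds)
    while frontier:
        r, c = frontier.pop()
        if (r, c) in sampled or not (0 <= r < rows and 0 <= c < cols):
            continue
        sampled.add((r, c))
        if grid[r][c] > threshold:                 # condition met: adapt
            frontier += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return sampled

# Invented clustered counts (e.g. birds per plot); the seed hits the cluster:
grid = [[0, 0, 0, 0],
        [0, 3, 5, 0],
        [0, 2, 0, 0],
        [0, 0, 0, 0]]
print(sorted(adaptive_cluster_sample(grid, seeds=[(1, 1)])))
```

A single lucky initial cell thus yields the whole network plus its zero-count edge units, which is why ACS is efficient for rare, clustered populations.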


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号