Similar Documents
20 similar documents found (search time: 0 ms)
1.
Results in five areas of survey sampling dealing with the choice of the sampling design are reviewed. In Section 2, the results and discussions surrounding the purposive selection methods suggested by linear regression superpopulation models are reviewed. In Section 3, models similar to those in the previous section are considered; however, random sampling designs are considered and attention is focused on the optimal choice of the inclusion probabilities πj. Then in Section 4, systematic sampling methods obtained under autocorrelated superpopulation models are reviewed. The next section examines minimax sampling designs. The work in the final section is based solely on the randomization: in Section 6, methods of sample selection which yield inclusion probabilities πj = n/N and πij = n(n − 1)/[N(N − 1)], but for which there are fewer than C(N, n) possible samples, are mentioned briefly.
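The quoted inclusion probabilities can be verified by brute force for a small population; a minimal sketch (N and n are hypothetical illustration values) that enumerates all C(N, n) simple random samples:

```python
from itertools import combinations

# Enumerate all C(N, n) equal-probability samples and check the inclusion
# probabilities quoted above: pi_j = n/N and pi_ij = n(n-1)/(N(N-1))
N, n = 6, 3
samples = list(combinations(range(N), n))

# empirical first-order inclusion probability of unit 0
pi_j = sum(1 for s in samples if 0 in s) / len(samples)
# empirical second-order inclusion probability of units 0 and 1
pi_ij = sum(1 for s in samples if 0 in s and 1 in s) / len(samples)
```

Section 6 of the review concerns designs that achieve these same probabilities while supporting fewer than all C(N, n) samples.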

2.
If the population size is not a multiple of the sample size, then the usual linear systematic sampling design is unattractive, since the sample size obtained will either vary, or be constant but different from the required sample size. Only a few modified systematic sampling designs are known to deal with this problem, and in the presence of linear trend most of these designs do not provide favorable results. In this paper, a modified systematic sampling design, known as remainder modified systematic sampling (RMSS), is introduced. There are seven cases of RMSS, and the results in this paper suggest that the proposed design is favorable in every case, while providing linear trend-free sampling results for three of the seven cases. To obtain linear trend-free sampling for the other cases and thus improve results, an end corrections estimator is constructed.
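The varying-sample-size problem is easy to exhibit; a minimal sketch of classic linear systematic sampling (N and n are hypothetical, with N not a multiple of n):

```python
def linear_systematic_sample(N, n, start):
    """Classic linear systematic sampling: every k-th unit from `start`, k = N // n."""
    k = N // n
    return list(range(start, N + 1, k))

# When N is not a multiple of n, the realised sample size varies with the
# random start, which is what the modified designs above aim to avoid
N, n = 10, 3
sizes = [len(linear_systematic_sample(N, n, s)) for s in range(1, N // n + 1)]
```

Here the three possible starts yield samples of unequal sizes, illustrating why a remainder-based modification is needed.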

3.
Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration versus time curve (AUC), for each analysis subject separately, and the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse sampling situations where only one sample is available per analysis subject, as in non-clinical in vivo studies. In a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power and type I error of seven methods for constructing two-sided 90% confidence intervals for ratios of two AUCs assessed under a serial sampling design, which can be used to assess bioequivalence in this parameter.
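A common way to estimate an AUC under serial sampling is to apply the trapezoidal rule to the per-time-point mean concentrations; a minimal sketch with hypothetical times and values (each subject contributes one measurement at one time point):

```python
# Serial sampling: each list holds the concentrations of the distinct
# subjects sampled at that time point (all values hypothetical)
samples = {
    0.0: [0.0, 0.0],
    1.0: [9.0, 11.0],
    2.0: [7.0, 9.0],
    4.0: [3.0, 5.0],
}
times = sorted(samples)
means = [sum(samples[t]) / len(samples[t]) for t in times]

# Trapezoidal AUC computed from the per-time mean concentrations
auc = sum((t1 - t0) * (c0 + c1) / 2
          for t0, t1, c0, c1 in zip(times, times[1:], means, means[1:]))
```

A ratio of two such AUCs (test over reference) is the bioequivalence parameter for which the paper's seven interval methods are compared.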

4.
This paper examines strategies for estimating the mean of a finite population in the following situation: a linear regression model is assumed to describe the population scatter. Various estimators β̂ for the vector of regression parameters β are considered. Several ways of transforming each estimator β̂ into a model-based estimator for the population mean are considered. Some estimators constructed in this way become sensitive to correctness of the assumed model. The estimators favoured in this paper are the ones in which the observations are weighted to reflect the sampling design, so that asymptotic design unbiasedness is achieved. For these estimators, the randomization distribution gives protection against model breakdown.
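Weighting observations to reflect the design usually means inverse-inclusion-probability weighting; a minimal Horvitz–Thompson-style sketch of a design-weighted population mean (values and probabilities are hypothetical):

```python
# Design-weighted estimate of a finite-population mean: each observed
# value is weighted by the inverse of its inclusion probability
N = 10                    # population size (hypothetical)
y = [4.0, 7.0, 5.0]       # observed sample values (hypothetical)
pi = [0.2, 0.5, 0.25]     # first-order inclusion probabilities (hypothetical)

ht_mean = sum(yi / p for yi, p in zip(y, pi)) / N
```

Because each unit's weight 1/πj makes its expected contribution equal to its population value, the estimator is design unbiased regardless of whether the regression model holds.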

5.
In this paper, we propose a sampling design termed multiple-start balanced modified systematic sampling (MBMSS), which involves the supplementation of two or more balanced modified systematic samples, thus permitting us to obtain an unbiased estimate of the associated sampling variance. There are five cases for this design, and in the presence of linear trend only one of these cases is optimal. To further improve results for the other cases, we propose an estimator that removes linear trend by applying weights to the first and last sampling units of the selected balanced modified systematic samples; it is thus termed the MBMSS with end corrections (MBMSSEC) estimator. By assuming a linear trend model averaged over a super-population model, we compare the expected mean square errors (MSEs) of the proposed sample means to those of simple random sampling (SRS), linear systematic sampling (LSS), stratified random sampling (STR), multiple-start linear systematic sampling (MLSS), and other modified MLSS estimators. As a result, MBMSS is optimal for one of the five possible cases, while the MBMSSEC estimator is preferred for three of the other four cases.

6.
Equally spaced designs are compared using the generalized variance as a measure of efficiency. Results for polynomial models are derived on the increased efficiency arising from increasing the number of design points when the regions are fixed and when the regions are expanded. The effects of dependence among the observations on these results are studied by considering a particular family of stationary correlated error structures.
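For independent errors, the generalized variance of the least-squares estimator is det((X'X)⁻¹); a minimal sketch (straight-line model on a fixed region, point counts hypothetical) showing that adding equally spaced points shrinks it:

```python
import numpy as np

def gen_variance(points, degree=1):
    """Generalized variance of the LS estimator: det((X'X)^-1), with sigma^2 = 1."""
    X = np.vander(np.asarray(points, dtype=float), degree + 1)
    return np.linalg.det(np.linalg.inv(X.T @ X))

# Equally spaced designs on the fixed region [-1, 1]: more points -> smaller
# generalized variance, i.e. a more efficient design
gv_3 = gen_variance(np.linspace(-1, 1, 3))
gv_5 = gen_variance(np.linspace(-1, 1, 5))
```

The paper's contribution is quantifying this gain for polynomial models and studying how correlated errors alter it.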

7.
A common strategy for avoiding information overload in multi-factor paired comparison experiments is to employ pairs of options which have different levels for only some of the factors in a study. For the practically important case where the factors fall into three groups such that all factors within a group have the same number of levels and where one is only interested in estimating the main effects, a comprehensive catalogue of D-optimal approximate designs is presented. These optimal designs use at most three different types of pairs and have a block diagonal information matrix.

8.
The primary objective of an oncology dose-finding trial for novel therapies, such as molecularly targeted agents and immuno-oncology therapies, is to identify the optimal dose (OD) that is tolerable and therapeutically beneficial for subjects in subsequent clinical trials. Pharmacokinetic (PK) information is considered an appropriate indicator for evaluating the level of drug intervention in humans from a pharmacological perspective. Several novel anticancer agents have been shown to have significant exposure-efficacy relationships, and some PK information has been considered an important predictor of efficacy. This paper proposes a Bayesian optimal interval design for dose optimization with a randomization scheme based on PK outcomes in oncology. A simulation study shows that the proposed design has advantages over the other designs in the percentage of correct OD selection and the average number of patients allocated to the OD in various realistic settings.

9.
10.
11.
12.
Kupper and Meydrech, and Myers and Lahoda, introduced the mean squared error (MSE) approach to study response surface designs; Duncan and DeGroot derived a criterion for optimality of linear experimental designs based on minimum mean squared error. However, minimization of the MSE of an estimator may require some knowledge about the unknown parameters. Without such knowledge, construction of designs optimal in the sense of MSE may not be possible. In this article a simple method of selecting the levels of regressor variables suitable for estimating some functions of the parameters of a lognormal regression model is developed, using a criterion for optimality based on the variance of an estimator. For some special parametric functions, the criterion used here is equivalent to the criterion of minimizing the mean squared error. It is found that the maximum likelihood estimators of a class of parametric functions can be improved substantially (in the sense of MSE) by proper choice of the values of the regressor variables. Moreover, our approach is applicable to analysis of variance as well as regression designs.

13.
The economic and statistical merits of a multiple variable sampling intervals scheme are studied. The problem is formulated as a double-objective optimization problem, with the adjusted average time to signal as the statistical objective and the expected cost per hour as the economic objective. Bai and Lee's economic model [An economic design of variable sampling interval X̄ control charts. Int J Prod Econ. 1998;54:57–64] is considered. We then find the Pareto-optimal designs, in which the two objectives are minimized simultaneously, using the non-dominated sorting genetic algorithm. Through an illustrative example, the advantages of the proposed approach are shown by providing a list of viable optimal solutions and graphical representations, which indicate the flexibility and adaptability of our approach.

14.
A standard two-arm randomised controlled trial usually compares an intervention to a control treatment with equal numbers of patients randomised to each treatment arm and only data from within the current trial are used to assess the treatment effect. Historical data are used when designing new trials and have recently been considered for use in the analysis when the required number of patients under a standard trial design cannot be achieved. Incorporating historical control data could lead to more efficient trials, reducing the number of controls required in the current study when the historical and current control data agree. However, when the data are inconsistent, there is potential for biased treatment effect estimates, inflated type I error and reduced power. We introduce two novel approaches for binary data which discount historical data based on the agreement with the current trial controls, an equivalence approach and an approach based on tail area probabilities. An adaptive design is used where the allocation ratio is adapted at the interim analysis, randomising fewer patients to control when there is agreement. The historical data are down-weighted in the analysis using the power prior approach with a fixed power. We compare operating characteristics of the proposed design to historical data methods in the literature: the modified power prior; commensurate prior; and robust mixture prior. The equivalence probability weight approach is intuitive and the operating characteristics can be calculated exactly. Furthermore, the equivalence bounds can be chosen to control the maximum possible inflation in type I error.
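For a binomial control rate with a conjugate Beta prior, a fixed-power power prior simply scales the historical counts; a minimal sketch (discount factor and counts are hypothetical, initial prior Beta(1, 1)):

```python
# Fixed-power power prior for a binomial control response rate: the
# historical likelihood is raised to the power a0, which for Beta-binomial
# conjugacy just multiplies the historical counts by a0
a0 = 0.5                  # fixed discount on the historical data (hypothetical)
y_h, n_h = 12, 40         # historical control: responders / patients (hypothetical)
y_c, n_c = 8, 30          # current control: responders / patients (hypothetical)

a_post = 1 + a0 * y_h + y_c
b_post = 1 + a0 * (n_h - y_h) + (n_c - y_c)
post_mean = a_post / (a_post + b_post)   # posterior mean control rate
```

With a0 = 0.5, the 40 historical patients contribute the information of 20, so agreement between the data sources determines how much borrowing is worthwhile.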

15.
Umbrella trials are an innovative trial design where different treatments are matched with subtypes of a disease, with the matching typically based on a set of biomarkers. Consequently, when patients can be positive for more than one biomarker, they may be eligible for multiple treatment arms. In practice, different approaches could be applied to allocate patients who are positive for multiple biomarkers to treatments. However, to date there has been little exploration of how these approaches compare statistically. We conduct a simulation study to compare five approaches to handling treatment allocation in the presence of multiple biomarkers – equal randomisation; randomisation with fixed probability of allocation to control; Bayesian adaptive randomisation (BAR); constrained randomisation; and hierarchy of biomarkers. We evaluate these approaches under different scenarios in the context of a hypothetical phase II biomarker-guided umbrella trial. We define the pairings representing the pre-trial expectations on efficacy as linked pairs, and the other biomarker-treatment pairings as unlinked. The hierarchy and BAR approaches have the highest power to detect a treatment-biomarker linked interaction. However, the hierarchy procedure performs poorly if the pre-specified treatment-biomarker pairings are incorrect. The BAR method allocates a higher proportion of patients who are positive for multiple biomarkers to promising treatments when an unlinked interaction is present. In most scenarios, the constrained randomisation approach best balances allocation to all treatment arms. Pre-specification of an approach to deal with treatment allocation in the presence of multiple biomarkers is important, especially when overlapping subgroups are likely.
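One common form of Bayesian adaptive randomisation allocates in proportion to the posterior probability that each eligible arm is best; a minimal Thompson-style sketch for a patient eligible for two arms (the Beta posteriors below are hypothetical):

```python
import random

random.seed(1)

# Hypothetical posterior Beta(a, b) parameters for two eligible arms,
# built from observed successes + 1 and failures + 1
arms = {"A": (8, 4), "B": (5, 9)}

def allocation_probs(arms, draws=20_000):
    """Monte Carlo estimate of P(arm has the highest response rate)."""
    wins = dict.fromkeys(arms, 0)
    for _ in range(draws):
        sampled = {k: random.betavariate(a, b) for k, (a, b) in arms.items()}
        wins[max(sampled, key=sampled.get)] += 1
    return {k: w / draws for k, w in wins.items()}

probs = allocation_probs(arms)  # use as the randomisation probabilities
```

Arms with stronger observed responses accumulate allocation probability as the trial proceeds, which is why BAR favours promising treatments for multi-biomarker patients.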

16.
Industrial statistics plays a major role in the areas of both quality management and innovation. However, existing methodologies must be integrated with the latest tools from the field of Artificial Intelligence. To this end, a background on the joint application of Design of Experiments (DOE) and Machine Learning (ML) methodologies in industrial settings is presented here, along with a case study from the chemical industry. A DOE study is used to collect data, and two ML models are applied to predict responses; their performance shows an advantage over the traditional modeling approach. Emphasis is placed on causal investigation and quantification of prediction uncertainty, as these are crucial for an assessment of the goodness and robustness of the models developed. Within the scope of the case study, the models learned can be implemented in a semi-automatic system that can assist practitioners who are inexperienced in data analysis in the process of new product development.

17.
This article considers constructing confidence intervals for the date of a structural break in linear regression models. Using extensive simulations, we compare the performance of various procedures in terms of exact coverage rates and lengths of the confidence intervals. These include the procedure of Bai (1997), based on the asymptotic distribution under a shrinking shift framework; that of Elliott and Müller (2007), based on inverting a test locally invariant to the magnitude of break; that of Eo and Morley (2015), based on inverting a likelihood ratio test; and various bootstrap procedures. On the basis of achieving an exact coverage rate that is closest to the nominal level, Elliott and Müller's (2007) approach is by far the best one. However, this comes at a very high cost in terms of the length of the confidence intervals. When the errors are serially correlated and dealing with a change in intercept or a change in the coefficient of a stationary regressor with a high signal-to-noise ratio, the length of the confidence interval increases and approaches the whole sample as the magnitude of the change increases. The same problem occurs in models with a lagged dependent variable, a common case in practice. This drawback is not present for the other methods, which have similar properties. Theoretical results are provided to explain the drawbacks of Elliott and Müller's (2007) method.

18.
An approach to the analysis of time-dependent ordinal quality score data from robust design experiments is developed and applied to an experiment from commercial horticultural research, using concepts of product robustness and longevity that are familiar to analysts in engineering research. A two-stage analysis is used to develop models describing the effects of a number of experimental treatments on the rate of post-sales product quality decline. The first stage uses a polynomial function on a transformed scale to approximate the quality decline for an individual experimental unit using derived coefficients and the second stage uses a joint mean and dispersion model to investigate the effects of the experimental treatments on these derived coefficients. The approach, developed specifically for an application in horticulture, is exemplified with data from a trial testing ornamental plants that are subjected to a range of treatments during production and home-life. The results of the analysis show how a number of control and noise factors affect the rate of post-production quality decline. Although the model is used to analyse quality data from a trial on ornamental plants, the approach developed is expected to be more generally applicable to a wide range of other complex production systems.
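The two-stage idea can be sketched simply: fit a per-unit polynomial, then model the derived coefficients across treatments. A minimal sketch with hypothetical trajectories (straight-line decline, two treatment groups):

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.arange(6, dtype=float)

# Stage 1: summarise each unit's quality trajectory by a fitted decline rate
def decline_slope(scores):
    return np.polyfit(times, scores, 1)[0]

# Hypothetical trajectories: treatment B declines faster than treatment A
group_a = [10 - 0.5 * times + rng.normal(0, 0.1, times.size) for _ in range(4)]
group_b = [10 - 1.5 * times + rng.normal(0, 0.1, times.size) for _ in range(4)]

# Stage 2: model the derived coefficients across treatments
# (here reduced to a comparison of group mean decline rates)
slopes_a = [decline_slope(y) for y in group_a]
slopes_b = [decline_slope(y) for y in group_b]
```

The paper's second stage is richer — a joint mean and dispersion model over the derived coefficients — but the per-unit-summary structure is the same.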

19.
In the context of estimating the regression coefficients of an ill-conditioned binary logistic regression model, we develop a new biased estimator with two parameters for estimating the regression parameter vector β when it is restricted to lie in the linear subspace Hβ = h. The matrix mean squared error and mean squared error (MSE) functions of these newly defined estimators are derived. Moreover, a method to choose the two parameters is proposed. The performance of the proposed estimator is then compared to that of the restricted maximum likelihood estimator and some other existing estimators, in the sense of MSE, via a Monte Carlo simulation study. According to the simulation results, the performance of the estimators depends on the sample size, the number of explanatory variables, and the degree of correlation. The superiority region of our proposed estimator is identified numerically in terms of the biasing parameters. It is concluded that the new estimator is superior to the others in most of the situations considered, and it is recommended to researchers.

20.
Recent studies have shown that variable sampling size and control limits (VSSC) schemes result in charts with more statistical power than variable sampling size (VSS) schemes when detecting small to moderate shifts in the process mean vector. This paper presents an economic-statistical design (ESD) of the VSSC T² control chart using the general model of Lorenzen and Vance [22]. The genetic algorithm approach is then employed to search for the optimal values of the six test parameters of the chart. We then compare the expected cost per unit of time of the optimally designed VSSC chart with optimally designed VSS and FRS (fixed ratio sampling) T² charts, as well as MEWMA charts.
