Similar Literature
20 similar documents found (search time: 31 ms)
1.
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, time intervals for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because population pharmacokinetic models are generally nonlinear mixed effects models, no analytical solution is available for determining sampling windows. We propose a method for determining sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. Although our work focuses on an application to population pharmacokinetic models, the method is applicable to sampling-window determination for any nonlinear mixed effects model.
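The core idea can be illustrated with a deliberately toy model rather than the population pharmacokinetic models the abstract targets: treat a design-efficiency measure as an unnormalized density over sampling times, draw from it with Metropolis-Hastings, and read a sampling window off the spread of the draws. In this sketch the one-parameter decay model, the noise assumption, and all numbers are hypothetical:

```python
import numpy as np

def mh_sampling_window(log_target, x0, n_iter=20000, prop_sd=0.5, seed=0):
    """Random-walk Metropolis sampler over a scalar sampling time."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_target(x0)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        xp = x + prop_sd * rng.standard_normal()
        lpp = log_target(xp)
        if np.log(rng.uniform()) < lpp - lp:
            x, lp = xp, lpp
        draws[i] = x
    return draws

# Hypothetical one-parameter model: a single observation of C(t) = exp(-k*t)
# with unit noise. The Fisher information for k at time t is t**2 * exp(-2*k*t),
# which is maximized at the optimal sampling time t = 1/k.
k = 1.0

def log_info(t):
    return -np.inf if t <= 0 else 2 * np.log(t) - 2 * k * t

draws = mh_sampling_window(log_info, x0=1.5)[5000:]   # discard burn-in
lo, hi = np.percentile(draws, [5, 95])
print(f"90% sampling window: [{lo:.2f}, {hi:.2f}] around the optimum t = 1/k = 1.0")
```

The reported interval is wide here because the toy information surface is flat; in a real application the window would be constrained to keep estimation efficiency above a chosen threshold.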

2.
In this paper, we propose a quick switching sampling system for assuring the mean life of a product under a time-truncated life test, where the lifetime of the product follows the Weibull distribution and the mean life is taken as the quality characteristic. The optimal parameters of the proposed system are determined using the two-points-on-the-operating-characteristic-curve approach for various combinations of the consumer's risk and the ratio of true mean lifetime to specified lifetime. Tables are constructed to determine the optimal parameters for a specified acceptable quality level and limiting quality level, along with the corresponding probabilities of acceptance. The proposed system is compared with other existing sampling plans under the Weibull lifetime model. In addition, an economic design of the proposed system is discussed.
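The two-points-on-the-OC-curve approach can be illustrated on the simplest case, a single attribute sampling plan (n, c), rather than the quick switching system itself: search for the smallest sample size whose acceptance probability is at least 1 − α at the AQL and at most β at the LQL. A minimal sketch, with illustrative risk values:

```python
from math import comb

def accept_prob(p, n, c):
    """Lot acceptance probability P(X <= c) with X ~ Binomial(n, p)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

def design_single_plan(aql, lql, alpha=0.05, beta=0.10, n_max=500):
    """Smallest (n, c) with Pa(aql) >= 1 - alpha and Pa(lql) <= beta."""
    for n in range(1, n_max + 1):
        for c in range(n):
            if accept_prob(lql, n, c) > beta:
                break                      # larger c only raises Pa(lql)
            if accept_prob(aql, n, c) >= 1 - alpha:
                return n, c
    return None

plan = design_single_plan(0.01, 0.05)      # AQL 1%, LQL 5% (illustrative)
print(plan)
```

The same two-point logic, with lifetime ratios in place of fractions nonconforming and acceptance probabilities computed under the Weibull model, underlies the tables described in the abstract.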

3.
In this paper, a multivariate Bayesian variable sampling interval (VSI) control chart is developed for the economic design and optimization of statistical parameters. Based on the VSI sampling strategy of a multivariate Bayesian control chart with dual control limits, an expected cost function is constructed. The proposed model determines the scheme parameters that minimize the expected cost per unit time of the process. The effectiveness of the Bayesian VSI chart is assessed through economic comparisons with the Bayesian fixed sampling interval chart and Hotelling's T2 chart. The work provides an in-depth study of a Bayesian multivariate control chart with variable parameters, and it is shown that significant cost improvement may be realized through the new model.

4.
In this paper, a new mixed sampling plan based on the process capability index (PCI) Cpk is proposed; the resulting plan is called the mixed variable lot-size chain sampling plan (ChSP). The proposed mixed plan comprises both attribute and variables inspection: the variable lot-size sampling plan is used for inspecting attribute quality characteristics, while the variables ChSP based on the PCI is used for inspecting measurable quality characteristics. Both symmetric and asymmetric fraction-nonconforming cases are considered for the variables ChSP. Tables are developed for determining the optimal parameters of the proposed mixed plan based on the two-points-on-the-operating-characteristic (OC) curve approach. To construct the tables, the problem is formulated as a nonlinear program in which the average sample number is minimized subject to constraints on the lot acceptance probabilities at the acceptable quality level and the limiting quality level on the OC curve. The practical implementation of the proposed mixed sampling plan is explained with a real-world example, and its advantages are discussed through comparison with other existing sampling plans.

5.
Our paper proposes a methodological strategy to select optimal sampling designs for phenotyping studies involving a cocktail of drugs. A cocktail approach is of high interest for determining the simultaneous activity of enzymes responsible for drug metabolism and pharmacokinetics, and is therefore useful in anticipating drug–drug interactions and in personalized medicine. Phenotyping indexes, which are areas under the concentration–time curve, can be derived from a few samples using nonlinear mixed effect models and maximum a posteriori estimation. Because of clinical constraints in phenotyping studies, the number of samples that can be collected from individuals is limited and the sampling times must be as flexible as possible. Therefore, to optimize a joint design for several drugs (i.e., to determine a compromise between informative times that best characterize each drug's kinetics), we propose a compound optimality criterion based on the expected population Fisher information matrix in nonlinear mixed effect models. This criterion allows weighting the different models, which can be useful for reflecting the importance accorded to each target in a phenotyping test. We also compute windows around the optimal times, based on recursive random sampling and Monte Carlo simulation, while maintaining a reasonable level of efficiency for parameter estimation. We illustrate this strategy for two drugs often included in phenotyping cocktails, midazolam (a probe for CYP3A) and digoxin (for P-glycoprotein), based on the data of a previous study, and obtain a sparse and flexible design. The design was evaluated by clinical trial simulations and shown to be efficient for the estimation of population and individual parameters. Copyright © 2015 John Wiley & Sons, Ltd.

6.
A fast general extension algorithm for Latin hypercube sampling (LHS) is proposed, which reduces the time consumption of the basic general extension while preserving most of the original sampling points. The extension algorithm starts with an original LHS of size m and constructs a new LHS of size m + n that retains the original points. The algorithm builds on the basic general extension, which is too time-consuming for obtaining the new LHS. When selecting the original sampling points to preserve, time consumption is reduced in three ways: first, isolated vertices are selected and the adjacency matrix is divided into blocks; second, the relationship between the original and new LHS structures is exploited; third, upper and lower bounds help reduce the computation. The proposed algorithm is applied to two test functions to demonstrate its effectiveness.
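A minimal flavor of nested LHS extension, much simpler than the general algorithm the abstract describes, is the doubling case m → 2m: refine each dimension into 2m strata, keep all original points (each still occupies a distinct refined stratum), and fill the empty strata with new points. A sketch under these assumptions:

```python
import numpy as np

def lhs(m, d, rng):
    """Standard Latin hypercube sample: m points in [0, 1)^d."""
    x = np.empty((m, d))
    for j in range(d):
        x[:, j] = (rng.permutation(m) + rng.random(m)) / m
    return x

def extend_lhs(x, rng):
    """Double an m-point LHS to 2m points while keeping every original point.
    Each original point already sits in a distinct refined stratum, so filling
    the m empty strata per dimension yields a valid 2m-point LHS."""
    m, d = x.shape
    new = np.empty((m, d))
    for j in range(d):
        occupied = set(np.floor(x[:, j] * 2 * m).astype(int))
        empty = np.array([s for s in range(2 * m) if s not in occupied])
        new[:, j] = (rng.permutation(empty) + rng.random(m)) / (2 * m)
    return np.vstack([x, new])

rng = np.random.default_rng(1)
x = lhs(5, 3, rng)
x2 = extend_lhs(x, rng)
print(x2.shape)   # (10, 3) -- all five original points retained
```

The general m → m + n case is harder precisely because arbitrary n does not align the old and new strata, which is where the block decomposition and bounds in the paper come in.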

7.
The performance of nonparametric function estimates often depends on the choice of design points. Based on the mean integrated squared error criterion, we propose a sequential design procedure that updates the model knowledge and the optimal design density sequentially. The methodology is developed under a general framework covering a wide range of nonparametric inference problems, such as conditional mean and variance functions, the conditional distribution function, the conditional quantile function in quantile regression, functional coefficients in varying coefficient models, and semiparametric inference. In our empirical studies, nonparametric inference under the proposed sequential design is more efficient than under the uniform design, and its performance is close to that of the true but unknown optimal design. The Canadian Journal of Statistics 40: 362–377; 2012 © 2012 Statistical Society of Canada

8.
Acceptance sampling plans are quality tools for both the manufacturer and the customer: reducing the number of nonconforming items increases the manufacturer's profit and enhances consumer satisfaction. In this article, a mixed double sampling plan is proposed in which attribute double sampling inspection is used in the first stage and a variables sampling plan based on the process capability index Cpk is used in the second stage. The optimal parameters are determined so that the producer's and consumer's risks are satisfied with the minimum average sample number, and are estimated for different plan settings using the two-points-on-the-operating-characteristic-curve approach. In designing the proposed mixed double sampling plan, we consider both symmetric and asymmetric nonconforming cases under variables inspection. The efficiency of the proposed plan is discussed and compared with existing sampling plans. Tables are constructed for easy selection of the optimal plan parameters, and an industrial example is included to illustrate implementation of the proposed plan.

9.
In this article the problem of the optimal selection and allocation of time points in repeated measures experiments is considered. D-optimal designs for linear regression models with a random intercept and first-order auto-regressive serial correlations are computed numerically and compared with designs having equally spaced time points. When the order of the polynomial is known and the serial correlations are not too small, the comparison shows that for any fixed number of repeated measures, a design with equally spaced time points is almost as efficient as the D-optimal design. When, however, there is no prior knowledge about the order of the underlying polynomial, the best choice in terms of efficiency is a D-optimal design for the highest possible relevant order of the polynomial; a design with equally spaced time points is the second-best choice.

10.
In this article, we propose a novel approach to fitting a functional linear regression in which both the response and the predictor are functions. We consider the case where the response and predictor processes are both sparsely sampled at random time points and are contaminated with random errors; in addition, the random times are allowed to differ between the measurements of the predictor and the response functions. This situation often occurs in longitudinal data settings. To estimate the covariance and cross-covariance functions, we use a regularization method over a reproducing kernel Hilbert space. The estimate of the cross-covariance function is used to obtain estimates of the regression coefficient function and of the functional singular components. We derive the convergence rates of the proposed cross-covariance, regression coefficient, and singular component function estimators, and show that, under some regularity conditions, the estimator of the coefficient function attains a minimax optimal rate. A simulation study demonstrates the merits of the proposed method in comparison with other existing methods, and we illustrate the method with an application to a real-world air quality dataset. The Canadian Journal of Statistics 47: 524–559; 2019 © 2019 Statistical Society of Canada

11.
In this paper, we consider regression analysis for a missing-data problem in which the variables of primary interest are unobserved under a general biased sampling scheme, the outcome-dependent sampling (ODS) design. We propose a semiparametric empirical likelihood method for assessing the association between a continuous outcome response and unobservable factors of interest. Simulation results show that the ODS design can produce more efficient estimators than a simple random design of the same sample size. We demonstrate the proposed approach with a data set from an environmental study of genetic effects on human lung function in COPD smokers. The Canadian Journal of Statistics 40: 282–303; 2012 © 2012 Statistical Society of Canada

12.
A typical model for geostatistical data when the observations are counts is the spatial generalised linear mixed model. We present a criterion for optimal sampling design under this framework which aims to minimise the error in the prediction of the underlying spatial random effects. The proposed criterion is derived by performing an asymptotic expansion to the conditional prediction variance. We argue that the mean of the spatial process needs to be taken into account in the construction of the predictive design, which we demonstrate through a simulation study where we compare the proposed criterion against the widely used space-filling design. Furthermore, our results are applied to the Norway precipitation data and the rhizoctonia disease data.

13.
Acceptance sampling is a quality assurance tool that provides the producer and the consumer with a rule for accepting or rejecting a lot. This paper develops a more efficient sampling plan, a variables repetitive group sampling plan, based on the total loss to the producer and the consumer. In this model, two constraints satisfy the opposing priorities and requirements of the producer and the consumer, using the acceptable quality level (AQL) and limiting quality level (LQL) points on the operating characteristic (OC) curve, and the objective function is constructed from the total expected loss. An example illustrates the application of the proposed model, and the effects of the process parameters on the optimal solution and the total expected loss are studied through a sensitivity analysis. Finally, the efficiency of the proposed model is compared with the variables single sampling plan, the variables double sampling plan, and the repetitive group sampling plan of Balamurali and Jun (2006) in terms of average sample number, total expected loss, and deviation from the ideal OC curve.

14.
15.
In this paper, we study the bioequivalence (BE) inference problem motivated by pharmacokinetic data collected using the serial sampling technique. In serial sampling designs, subjects are independently assigned to one of two drugs; each subject can be sampled only once, and data are collected at K distinct time points from multiple subjects. We consider design and hypothesis testing for the parameter of interest: the area under the concentration–time curve (AUC). Decision rules for demonstrating BE are established using an equivalence test for either the ratio or the logarithmic difference of two AUCs. The proposed t-test can handle cases where the two AUCs have unequal variances; to control the type I error rate, the degrees of freedom are adjusted using Satterthwaite's approximation. A power formula is derived to allow the determination of necessary sample sizes. Simulation results show that, when the two AUCs have unequal variances, the type I error rate is better controlled by the proposed method than by a method that assumes equal variances. We also propose an unequal subject-allocation method that improves power relative to equal and symmetric allocation. The methods are illustrated using practical examples.
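The Satterthwaite adjustment itself is standard and easy to sketch: the Welch statistic for a difference in means with unequal variances, with approximate degrees of freedom used to pick the t critical value in each one-sided test of a TOST procedure. The data below are simulated placeholders, not the paper's examples:

```python
import numpy as np

def welch_satterthwaite(x, y):
    """Welch t statistic and Satterthwaite degrees of freedom for the
    difference in means of two independent samples with unequal variances."""
    n1, n2 = len(x), len(y)
    v1, v2 = np.var(x, ddof=1) / n1, np.var(y, ddof=1) / n2
    t = (np.mean(x) - np.mean(y)) / np.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Simulated placeholder log-AUC data (the means and SDs are made up).
rng = np.random.default_rng(0)
log_auc_a = rng.normal(4.0, 0.2, size=12)
log_auc_b = rng.normal(4.0, 0.5, size=16)
t, df = welch_satterthwaite(log_auc_a, log_auc_b)
print(f"t = {t:.3f}, Satterthwaite df = {df:.1f}")
```

In an equivalence (TOST) setting, each of the two one-sided statistics would be compared with the t critical value at these approximate degrees of freedom rather than at n1 + n2 − 2.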

16.
Optimal main effects plans (MEPs) and optimal foldover designs can often be performed as a series of nested optimal designs. Then, if the experiment cannot be completed due to time or budget constraints, the fraction already performed may still be an optimal design. We show that the optimal MEP for 4t factors in 4t + 4 points does not contain the optimal MEP for 4t factors in 4t + 2 points nested within it. In general, the optimal MEP for 4t factors in 4t + 4 points does not contain the optimal MEPs for 4t factors in 4t + 1, 4t + 2, or 4t + 3 points and the optimal MEP for 4t + 1 factors in 4t + 4 points does not contain the optimal MEPs for 4t + 1 factors in 4t + 2 or 4t + 3 points. We also show that the runs in an orthogonal design for 4t factors in 4t + 4 points, and the optimal foldover designs obtained by folding, should be performed in a certain sequence in order to avoid the possibility of a singular X'X matrix.

17.
We develop a hierarchical Gaussian process model for forecasting and inference of functional time series data. Unlike existing methods, our approach is especially suited for sparsely or irregularly sampled curves and for curves sampled with nonnegligible measurement error. The latent process is dynamically modeled as a functional autoregression (FAR) with Gaussian process innovations. We propose a fully nonparametric dynamic functional factor model for the dynamic innovation process, with broader applicability and improved computational efficiency over standard Gaussian process models. We prove finite-sample forecasting and interpolation optimality properties of the proposed model, which remain valid with the Gaussian assumption relaxed. An efficient Gibbs sampling algorithm is developed for estimation, inference, and forecasting, with extensions for FAR(p) models with model averaging over the lag p. Extensive simulations demonstrate substantial improvements in forecasting performance and recovery of the autoregressive surface over competing methods, especially under sparse designs. We apply the proposed methods to forecast nominal and real yield curves using daily U.S. data. Real yields are observed more sparsely than nominal yields, yet the proposed methods are highly competitive in both settings. Supplementary materials, including R code and the yield curve data, are available online.

18.
The economic and statistical merits of a multiple variable sampling intervals scheme are studied. The problem is formulated as a double-objective optimization problem with the adjusted average time to signal as the statistical objective and the expected cost per hour as the economic objective, using the economic model of Bai and Lee [An economic design of variable sampling interval X̄ control charts. Int J Prod Econ. 1998;54:57–64]. Pareto-optimal designs, in which the two objectives are minimized simultaneously, are found using the non-dominated sorting genetic algorithm. An illustrative example shows the advantages of the proposed approach by providing a list of viable optimal solutions and graphical representations that highlight its flexibility and adaptability.

19.
The use of robust measures helps to increase the precision of estimators, especially for extremely skewed distributions. In this article, a generalized ratio estimator is proposed that uses robust measures of a single auxiliary variable under the adaptive cluster sampling (ACS) design. We incorporate the tri-mean (TM), mid-range (MR), and Hodges-Lehmann (HL) estimator of the auxiliary variable as robust measures, together with some conventional measures. Expressions for the bias and mean square error (MSE) of the proposed generalized ratio estimator are derived. Two numerical studies, one using an artificial clustered population and one using real data, examine the performance of the proposed estimator against the usual mean-per-unit estimator under simple random sampling (SRS). The simulation results show that the proposed estimators outperform the competing estimators on both the real and artificial populations.
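The classical ratio estimator that such generalized estimators build on, together with the tri-mean as one example of a robust measure, can be sketched as follows; the generalized ACS version in the article is more involved, and this shows only the SRS building block with made-up numbers:

```python
import numpy as np

def tri_mean(a):
    """Tukey's tri-mean: (Q1 + 2*Q2 + Q3) / 4."""
    q1, q2, q3 = np.percentile(a, [25, 50, 75])
    return (q1 + 2 * q2 + q3) / 4

def ratio_estimate(y, x, X_mean):
    """Classical ratio estimator of the population mean of y, using the
    known population mean X_mean of the auxiliary variable x."""
    return np.mean(y) * X_mean / np.mean(x)

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x                                  # y exactly proportional to x
print(ratio_estimate(y, x, X_mean=2.5))    # 5.0 -- exact under proportionality
print(tri_mean(x))                         # 2.5
```

The ratio estimator gains precision when y and x are strongly positively correlated, and substituting robust summaries such as the tri-mean for ordinary means is what protects the estimator under heavy skew.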

20.
We consider optimal designs for a class of symmetric models for binary data that includes the common probit and logit models. We show that for a large group of optimality criteria, which includes the main ones in the literature (e.g., A-, D-, E-, F- and G-optimality), the optimal design for this class of models is a two-point design with support points placed symmetrically about the ED50 but with possibly unequal weights. We demonstrate how the problem can be reduced further to a one-variable optimization by characterizing several of the common criteria. We also use the results to demonstrate major qualitative differences between the F- and c-optimal designs, two design criteria with similar motivations.
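For the logit model the two-point structure can be checked numerically: with equal weights at ±z (in units of the linear predictor), the information matrix is diagonal with determinant [p(1−p)]²z², and maximizing it recovers the well-known D-optimal support at response probabilities of roughly 0.176 and 0.824. A one-variable sketch:

```python
import numpy as np

def dopt_support_logit():
    """Grid search for the symmetric support point z* (in units of the
    linear predictor a + b*x) that maximizes det M = [p(1-p)]^2 * z^2,
    the information determinant of an equal-weight design at +-z."""
    z = np.linspace(0.5, 3.0, 200001)
    p = 1.0 / (1.0 + np.exp(-z))
    log_det = 2 * (np.log(p * (1 - p)) + np.log(z))
    return z[np.argmax(log_det)]

z_star = dopt_support_logit()
p_star = 1.0 / (1.0 + np.exp(-z_star))
print(z_star, p_star)   # roughly 1.543 and 0.824
```

This is exactly the one-variable reduction the abstract refers to for D-optimality; other criteria in the class lead to different scalar objectives over z (and possibly unequal weights).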
