Similar Articles
20 similar articles found (search time: 16 ms)
1.
Selective assembly is an effective approach for improving the quality of a product that is composed of two mating components. This article studies optimal partitioning of the dimensional distributions of the components in selective assembly. It extends previous results for the squared error loss function to general convex loss functions, including asymmetric ones. Equations for the optimal partition are derived. Assuming that the density function of the dimensional distribution is log-concave, uniqueness of solutions is established and some properties of the optimal partition are shown. Numerical results compare the optimal partition with some heuristic partitioning schemes.
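For the squared-error special case, the first-order conditions for the optimal partition reduce to the classical Lloyd–Max quantizer conditions: each boundary sits midway between the conditional means of its two neighbouring bins. A minimal empirical sketch (the function name and the sample-based iteration are illustrative, not taken from the article):

```python
import numpy as np

def lloyd_partition(samples, k, iters=200):
    """Iterate the first-order conditions for minimizing expected squared
    error: each boundary must sit midway between the conditional means of
    its two neighbouring bins (the classical Lloyd-Max conditions)."""
    edges = np.quantile(samples, np.linspace(0, 1, k + 1)[1:-1])  # warm start
    for _ in range(iters):
        bounds = np.concatenate(([-np.inf], edges, [np.inf]))
        means = np.array([samples[(samples >= lo) & (samples < hi)].mean()
                          for lo, hi in zip(bounds[:-1], bounds[1:])])
        edges = 0.5 * (means[:-1] + means[1:])
    return edges

rng = np.random.default_rng(0)
edges = lloyd_partition(rng.normal(0.0, 1.0, 100_000), k=4)
print(edges)  # for N(0, 1) and k = 4, roughly (-0.98, 0, 0.98)
```

The equal-probability quantile start is one of the heuristic schemes the optimal partition is compared against; the iteration then moves it to the squared-error optimum.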

2.
Selective assembly is an effective approach for improving the quality of a product assembled from two types of components, when the quality characteristic is the clearance between the mating components. Mease et al. (2004, Technometrics 46:165–175) extensively studied optimal binning strategies under squared error loss in selective assembly, especially for the case in which the two component dimensions are identically distributed. However, the presence of measurement error in the component dimensions has not been addressed. Here we study optimal binning strategies under squared error loss when measurement error is present. We give the equations for the optimal partition limits minimizing expected squared error loss, and show that their solution is unique when the component dimensions and the measurement errors are normally distributed. We then compare the expected losses of the optimal binning strategies with and without measurement error for the normal distribution, and evaluate the influence of the measurement error.
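A small simulation makes the effect concrete: bin two identically distributed normal components on error-contaminated measurements, mate like-numbered bins, and compare the mean squared clearance with the error-free case. All names, the equal-probability binning rule, and the parameter values below are illustrative, not the article's optimal limits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 200_000, 6
x_true = rng.normal(0.0, 1.0, n)        # e.g. hole dimension
y_true = rng.normal(0.0, 1.0, n)        # mating shaft dimension
sigma_m = 0.3                           # measurement-error standard deviation

def selective_loss(x_obs, y_obs, k):
    """Bin both components into k equal-probability bins on the OBSERVED
    values, mate like-numbered bins, and return the mean squared clearance
    computed from the TRUE dimensions."""
    qx = np.quantile(x_obs, np.linspace(0, 1, k + 1))
    qy = np.quantile(y_obs, np.linspace(0, 1, k + 1))
    bx = np.clip(np.searchsorted(qx, x_obs) - 1, 0, k - 1)
    by = np.clip(np.searchsorted(qy, y_obs) - 1, 0, k - 1)
    loss = 0.0
    for b in range(k):
        xs, ys = x_true[bx == b], y_true[by == b]
        m = min(len(xs), len(ys))
        loss += np.sum((xs[:m] - ys[:m]) ** 2)
    return loss / n

exact = selective_loss(x_true, y_true, k)
noisy = selective_loss(x_true + rng.normal(0, sigma_m, n),
                       y_true + rng.normal(0, sigma_m, n), k)
print(exact, noisy)  # measurement error inflates the expected loss
```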

3.
A long-standing problem in clinical research is distinguishing drug treated subjects that respond due to specific effects of the drug from those that respond to non-specific (or placebo) effects of the treatment. Linear mixed effect models are commonly used to model longitudinal clinical trial data. In this paper we present a solution to the problem of identifying placebo responders using an optimal partitioning methodology for linear mixed effects models. Since individual outcomes in a longitudinal study correspond to curves, the optimal partitioning methodology produces a set of prototypical outcome profiles. The optimal partitioning methodology can accommodate both continuous and discrete covariates. The proposed partitioning strategy is compared and contrasted with the growth mixture modelling approach. The methodology is applied to a two-phase depression clinical trial where subjects in a first phase were treated openly for 12 weeks with fluoxetine followed by a double blind discontinuation phase where responders to treatment in the first phase were randomized to either stay on fluoxetine or switched to a placebo. The optimal partitioning methodology is applied to the first phase to identify prototypical outcome profiles. Using time to relapse in the second phase of the study, a survival analysis is performed on the partitioned data. The optimal partitioning results identify prototypical profiles that distinguish whether subjects relapse depending on whether or not they stay on the drug or are randomized to a placebo.  相似文献   

4.
In survival analysis, it is routine to test the equality of two survival curves, most often with the log-rank test. Although optimal under the proportional hazards assumption, the log-rank test is known to have little power when the survival or hazard functions cross. To test the overall homogeneity of hazard rate functions, we propose a family of partitioned log-rank tests. By partitioning the time axis and taking the supremum of the sum of the two partitioned log-rank statistics over the partitioning points, the proposed test gains substantial power in cases with crossing hazards. When the hazards are indeed proportional, the test still maintains power close to that of the optimal log-rank test. Extensive simulation studies compare the proposed test with existing methods, and three real data examples illustrate how common crossing hazards are and the advantages of the partitioned log-rank tests.
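A bare-bones numpy sketch of the idea (uncensored data only, for brevity; all names are ours): compute the per-event-time log-rank terms once, then take the supremum over split points of the sum of the two partitioned chi-square statistics. By the Cauchy–Schwarz inequality, each split's sum is at least the plain log-rank chi-square, which is why power is retained under proportional hazards:

```python
import numpy as np

def logrank_terms(times, groups):
    """Per-event-time (O - E) and hypergeometric variance terms for the
    two-sample log-rank test (no censoring, for simplicity)."""
    order = np.argsort(times)
    t, g = times[order], groups[order]
    oe, var = [], []
    at_risk = len(t)
    at_risk1 = int(np.sum(g == 1))
    i = 0
    while i < len(t):
        j = i
        while j < len(t) and t[j] == t[i]:
            j += 1                       # handle tied event times
        d = j - i                        # events at this time
        d1 = int(np.sum(g[i:j] == 1))    # events in group 1
        if at_risk > 1:
            frac = at_risk1 / at_risk
            oe.append(d1 - d * frac)
            var.append(d * frac * (1 - frac) * (at_risk - d) / (at_risk - 1))
        at_risk1 -= d1
        at_risk -= d
        i = j
    return np.array(oe), np.array(var)

def partitioned_logrank(times, groups):
    """Plain log-rank chi-square, and the sup over split points of the
    sum of the two partitioned log-rank chi-square statistics."""
    oe, var = logrank_terms(times, groups)
    plain = oe.sum() ** 2 / var.sum()
    best = plain                         # undivided axis as a baseline
    for s in range(1, len(oe)):
        va, vb = var[:s].sum(), var[s:].sum()
        if va > 0 and vb > 0:
            best = max(best, oe[:s].sum() ** 2 / va + oe[s:].sum() ** 2 / vb)
    return plain, best

# crossing hazards: group 1 mixes many early and many late failures
rng = np.random.default_rng(2)
t0 = rng.exponential(1.0, 150)
mix = rng.random(150) < 0.5
t1 = np.where(mix, rng.exponential(0.25, 150), rng.exponential(4.0, 150))
times = np.concatenate([t0, t1])
groups = np.concatenate([np.zeros(150), np.ones(150)])
plain, sup_stat = partitioned_logrank(times, groups)
print(plain, sup_stat)
```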

5.
The methods developed by John and Draper et al. of partitioning the blends (runs) of four mixture components into two or more orthogonal blocks when fitting quadratic models are extended to mixtures of five components. The characteristics of Latin squares of side five are used to derive rules for reliably and quickly obtaining designs with specific properties. The designs also produce orthogonal blocks when higher order models are fitted.

6.
The correspondence analysis (CA) method appears to be an effective tool for analysis of interrelations between rows and columns in two-way contingency data. A discrete version of the method, box clustering, is developed in the paper using an approximation version of the CA model extended to the case when CA factor values are required to be Boolean. Several properties of the proposed SEFIT-BOX algorithm are proved to facilitate interpretation of its output. It is also shown that two known partitioning algorithms (applied within row or column sets only) could be considered as locally optimal algorithms for fitting the model, and extensions of these algorithms to a simultaneous row and column partitioning problem are proposed.

7.
A common statistical problem involves the testing of a K-dimensional parameter vector. In both parametric and semiparametric settings, two types of directional tests – linear combination and constrained tests – are frequently used instead of omnibus tests in hopes of achieving greater power for specific alternatives. In this paper, we consider the relationship between these directional tests, as well as their relationship to omnibus tests. Every constrained directional test is shown to be asymptotically equivalent to a specific linear combination test under a sequence of contiguous alternatives and vice versa. Even when the direction of the alternative is known, the constrained test in general will not be optimal unless the objective function used to derive it is efficient. For an arbitrary alternative, insight into the power characteristics of directional tests in comparison to omnibus tests can be gained by a chi-square partition of the omnibus test.

8.
In response surface methodology, one is usually interested in estimating the optimal conditions based on a small number of experimental runs which are designed to optimally sample the experimental space. Typically, regression models are constructed from the experimental data and interrogated in order to provide a point estimate of the independent variable settings predicted to optimize the response. Unfortunately, these point estimates are rarely accompanied with uncertainty intervals. Though classical frequentist confidence intervals can be constructed for unconstrained quadratic models, higher order, constrained or nonlinear models are often encountered in practice. Existing techniques for constructing uncertainty estimates in such situations have not been implemented widely, due in part to the need to set adjustable parameters or because of limited or difficult applicability to constrained or nonlinear problems. To address these limitations a Bayesian method of determining credible intervals for response surface optima was developed. The approach shows good coverage probabilities on two test problems, is straightforward to implement and is readily applicable to the kind of constrained and/or nonlinear problems that frequently appear in practice.
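A toy illustration of the idea in its simplest unconstrained form (one factor, quadratic model, flat prior; the data-generating settings are invented for the example and this is not the paper's algorithm): draw from the approximate posterior of the regression coefficients and propagate each draw to its stationary point, then read off percentiles.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(-2.0, 2.0, 40)
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(0.0, 0.3, x.size)  # optimum at x = 2/3

X = np.column_stack([np.ones_like(x), x, x ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
s2 = resid @ resid / (len(y) - 3)
cov = s2 * np.linalg.inv(X.T @ X)          # approximate posterior covariance

draws = rng.multivariate_normal(beta, cov, size=20_000)
opt = -draws[:, 1] / (2.0 * draws[:, 2])   # stationary point of each posterior draw
opt = opt[draws[:, 2] < 0.0]               # keep only draws that are maxima
lo, hi = np.percentile(opt, [2.5, 97.5])
print(lo, hi)  # ~95% credible interval for the location of the optimum
```

Constraints on the factor region can be imposed by clipping or rejecting draws, which is what makes the sampling-based interval attractive relative to delta-method confidence intervals.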

9.
Smoothing Splines and Shape Restrictions
Constrained smoothing splines are discussed under order restrictions on the shape of the function m. We consider shape constraints of the type m^(r) ≥ 0, i.e. positivity, monotonicity, convexity, .... (Here, for an integer r ≥ 0, m^(r) denotes the r-th derivative of m.) The paper contains three results: (1) constrained smoothing splines achieve optimal rates in shape-restricted Sobolev classes; (2) they are equivalent to two-step procedures of the following type: (a) in a first step the unconstrained smoothing spline is calculated; (b) in a second step the unconstrained smoothing spline is "projected" onto the constrained set. The projection is calculated with respect to a Sobolev-type norm; this result can be used for two purposes: it may motivate new algorithmic approaches, and it helps to understand the form of the estimator and its asymptotic properties; (3) the infinite number of constraints can be replaced by a finite number with only a small loss of accuracy; this is discussed for estimation of a convex function.
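The simplest instance of the projection in step (b) is monotonicity (r = 1) under the ordinary L2 norm, where the projection of the unconstrained fit onto non-decreasing functions is computed by the pool-adjacent-violators algorithm; the Sobolev-norm projection in the paper generalizes this. A minimal sketch:

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: Euclidean projection of y onto the cone of
    non-decreasing sequences (the r = 1, L2 special case of step (b))."""
    blocks = []                       # stack of [block mean, block size]
    for v in y.astype(float):
        blocks.append([v, 1.0])
        # merge while the last two blocks violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            v2, w2 = blocks.pop()
            v1, w1 = blocks.pop()
            blocks.append([(v1 * w1 + v2 * w2) / (w1 + w2), w1 + w2])
    return np.concatenate([np.full(int(w), v) for v, w in blocks])

fit = np.array([0.1, 0.5, 0.4, 0.9, 0.8, 1.2])  # unconstrained spline values
proj = pava(fit)
print(proj)  # [0.1, 0.45, 0.45, 0.85, 0.85, 1.2]
```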

10.
In this paper we consider the problem of optimal allocation of a redundant component in series and parallel systems of two components when the components are dependent. Whereas this problem has been treated extensively for independent components, the dependent case has received much less attention. We show that the main tools for this problem are the joint stochastic orders introduced by Shanthikumar and Yao (1991).

11.
Distributed agent-based simulation is a popular method for running computational experiments on large-scale artificial societies. The strategy used to partition the artificial society model among hosts plays an essential role in the execution efficiency of the simulation engine, since it largely determines the communication overhead and the computational load balance. To address this problem, we first analyze the execution and scheduling of agents during simulation and model the process as a wide-sense cyclostationary random process. A static statistical partitioning model is then proposed to obtain the partitioning strategy that minimizes the average communication cost and the load imbalance factor. To solve this model, we recast it as a graph-partitioning problem and devise a statistical-movement graph-based partitioning algorithm, which builds the task graph by mining statistical movement information from the initialization data of the simulation model. In the experiments, two other popular partitioning methods are used to evaluate the performance of the proposed algorithm, and partitioning performance is compared under different task graph models. The results indicate that the proposed method outperforms the others in reducing communication overhead while satisfying the load-balance constraint.
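The core trade-off can be sketched with a toy greedy partitioner (a stand-in for the paper's statistical-movement graph method; the example graph, weights, and function name are ours): place each node in the part where co-location with already-placed neighbours saves the most communication weight, subject to a balance cap.

```python
def greedy_partition(n_nodes, edges, weights, k=2):
    """Greedy balanced k-way partition: visit nodes in index order and put
    each one where co-location with placed neighbours avoids the most cut
    cost, among parts that still have room."""
    cap = -(-n_nodes // k)                  # ceil(n/k): load-balance constraint
    adj = [[] for _ in range(n_nodes)]
    for (u, v), w in zip(edges, weights):
        adj[u].append((v, w))
        adj[v].append((u, w))
    part, size = [-1] * n_nodes, [0] * k
    for u in range(n_nodes):
        gain = [0.0] * k
        for v, w in adj[u]:
            if part[v] >= 0:
                gain[part[v]] += w          # avoided cut cost
        best = max((p for p in range(k) if size[p] < cap), key=lambda p: gain[p])
        part[u] = best
        size[best] += 1
    return part

# two triangles of heavy internal traffic joined by one light edge
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
weights = [5, 5, 5, 5, 5, 5, 1]
part = greedy_partition(6, edges, weights, k=2)
cut = sum(w for (u, v), w in zip(edges, weights) if part[u] != part[v])
print(part, cut)  # the light edge (2, 3) is the only cut edge
```

Like any greedy heuristic this is order-sensitive; production partitioners (e.g. METIS-style multilevel methods) refine such an initial assignment.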

12.
For paired comparison experiments involving pairs of multifactor options differing in a specified number of factors the problem of finding optimal designs is considered, when only main effects are to be estimated. It is presumed that the set of factors can be partitioned into two groups such that the number of levels is constant within each group. The optimal designs for this frequently encountered case are also optimal for the corresponding choice experiments under the hypothesis that the parameters in the multinomial logit model are equal to zero.

13.
Registration of temporal observations is a fundamental problem in functional data analysis. Various frameworks have been developed over the past two decades where registrations are conducted based on optimal time warping between functions. Comparison of functions solely based on time warping, however, may have limited application, in particular when certain constraints are desired in the registration. In this paper, we study registration with norm-preserving constraint. A closely related problem is on signal estimation, where the goal is to estimate the ground-truth template given random observations with both compositional and additive noises. We propose to adopt the Fisher–Rao framework to compute the underlying template, and mathematically prove that such framework leads to a consistent estimator. We then illustrate the constrained Fisher–Rao registration using simulations as well as two real data sets. It is found that the constrained method is robust with respect to additive noise and has superior alignment and classification performance to conventional, unconstrained registration methods.
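The Fisher–Rao framework is usually computed through the square-root slope function (SRSF) q = sign(f′)·√|f′|, under which the Fisher–Rao metric becomes the ordinary L2 metric and time warping acts by norm-preserving isometries. A quick numerical check of that norm-preserving property (the test function, warping, and grid are ours):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)

def srsf(f):
    """Square-root slope function q = sign(f') * sqrt(|f'|) on the grid t."""
    df = np.gradient(f, t)
    return np.sign(df) * np.sqrt(np.abs(df))

def l2(g):
    """Trapezoidal L2 norm on the grid t."""
    return np.sqrt(np.sum(0.5 * (g[:-1] ** 2 + g[1:] ** 2) * np.diff(t)))

f = np.sin(2 * np.pi * t)
gamma = t ** 2                      # a warping: increasing, gamma(0)=0, gamma(1)=1
f_w = np.interp(gamma, t, f)        # f composed with gamma

q, q_w = srsf(f), srsf(f_w)
print(l2(q), l2(q_w))  # equal up to discretization: warping preserves the norm
```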

14.
It is often the case in mixture experiments that some of the ingredients, such as additives or flavourings, are included with proportions constrained to lie in a restricted interval, while the majority of the mixture is made up of a particular ingredient used as a filler. The experimental region in such cases is restricted to a parallelepiped in or near one corner of the full simplex region. In this paper, orthogonally blocked designs with two experimental blends on each edge of the constrained region are considered for mixture experiments with three and four ingredients. The optimal symmetric orthogonally blocked designs within this class are determined and it is shown that even better designs are obtained for the asymmetric situation, in which some experimental blends are taken at the vertices of the experimental region. Some examples are given to show how these ideas may be extended to identify good designs in three and four blocks. Finally, an example is included to illustrate how to overcome the problems of collinearity that sometimes occur when fitting quadratic models to experimental data from mixture experiments in which some of the ingredient proportions are restricted to small values.

15.
The analysis of high-dimensional data often begins with the identification of lower dimensional subspaces. Principal component analysis is a dimension reduction technique that identifies linear combinations of variables along which most variation occurs or which best “reconstruct” the original variables. For example, many temperature readings may be taken in a production process when in fact there are just a few underlying variables driving the process. A problem with principal components is that the linear combinations can seem quite arbitrary. To make them more interpretable, we introduce two classes of constraints. In the first, coefficients are constrained to equal a small number of values (homogeneity constraint). The second constraint attempts to set as many coefficients to zero as possible (sparsity constraint). The resultant interpretable directions are either calculated to be close to the original principal component directions, or calculated in a stepwise manner that may make the components more orthogonal. A small dataset on characteristics of cars is used to introduce the techniques. A more substantial data mining application is also given, illustrating the ability of the procedure to scale to a very large number of variables.
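The flavour of the sparsity constraint can be shown with a crude hard-thresholding stand-in (not the paper's constrained optimization; the synthetic data and threshold are ours): compute the ordinary first principal direction, zero out small loadings, and renormalize.

```python
import numpy as np

rng = np.random.default_rng(4)
factor = rng.normal(size=(500, 1))                         # one latent driver
X = np.hstack([factor + 0.1 * rng.normal(size=(500, 3)),   # 3 driven variables
               rng.normal(size=(500, 3))])                 # 3 pure-noise variables
X -= X.mean(axis=0)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = Vt[0]                                     # ordinary first PC direction

thr = 0.25
sparse = np.where(np.abs(pc1) > thr, pc1, 0.0)  # sparsity: zero small loadings
sparse /= np.linalg.norm(sparse)                # renormalize to unit length
print(np.round(sparse, 2))
```

The thresholded direction loads only on the three driven variables, which is exactly the interpretability gain the constraints are after; a homogeneity constraint would additionally round the surviving loadings to a common value.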

16.
In this paper we consider the worst-case adaptive complexity of the search problem , where  is also the set of independent sets of a matroid over S. We give a formula for the number of questions needed and an algorithm to find the optimal search algorithm for any matroid. This algorithm uses only O(|S|^3) steps (i.e. questions to the independence oracle). This is also the length of Edmonds' partitioning algorithm for matroids, which does not seem to be avoidable.

17.

Two mixed models exist for the analysis of two-way factorial ANOVA with mixed effects and interactions: the constrained and the unconstrained model. The constrained model is often disfavored because there is no convincing rationale for the constraints enforced on its random interactions and because its variance components lack a clear interpretation. The purpose of this study is to further explore the relationship between the two models. We reveal some attractive features of the constrained model regarding the partition of the response variance. An alternative formulation of ANOVA that follows from this exploration is also presented.

18.
MIL-STD-1235C establishes standard procedures for the selection and implementation of single- and multi-level continuous sampling plans, such as CSP-1, CSP-F, CSP-2, CSP-T and CSP-V. CSP-V is a single-level continuous sampling procedure that alternates between 100% inspection (at either a full or a reduced clearance number) and sampling inspection. It requires a return to 100% inspection whenever a non-conforming unit is found during sampling inspection, but allows a reduced clearance number once superior product quality has been demonstrated. The CSP-V procedure serves as an alternative to the CSP-T procedure when a reduction in sampling frequency has no economic merit. In this paper, expressions for the average outgoing quality, the average fraction inspected and the operating characteristic function are derived using a Markov chain model. Four tables are given to enable the selection of CSP-V plans when the acceptable quality level, or the limiting quality level, and the average outgoing quality limit are specified.
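The CSP-V expressions are lengthy, but the same Markov-chain argument in its simplest form yields Dodge's classical CSP-1 measures, sketched here as an illustration of how such tables are computed (the parameter values are made up; the CSP-V formulas in the paper add more chain states):

```python
def csp1_measures(p, i, f):
    """Dodge's long-run measures for CSP-1 from its Markov-chain analysis.
    p: incoming fraction defective, i: clearance number, f: sampling fraction."""
    q = 1.0 - p
    u = (1.0 - q ** i) / (p * q ** i)    # mean units per 100% inspection phase
    v = 1.0 / (f * p)                    # mean units per sampling phase
    afi = (u + f * v) / (u + v)          # average fraction inspected
    aoq = p * (1.0 - f) * v / (u + v)    # average outgoing quality
    return afi, aoq

afi, aoq = csp1_measures(p=0.02, i=50, f=0.1)
print(round(afi, 4), round(aoq, 4))
```

Sweeping p over a grid and taking the maximum of aoq gives the average outgoing quality limit used to index the selection tables.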

19.
Two types of symmetry can arise when the proportions of mixture components are constrained by upper and lower bounds. These two types of symmetry are shown to be useful for blocking first-order designs, as well as for finding the centroid of the experimental region. Orthogonal blocking of first-order mixture designs provides a method of including process variables in the mixture experiment, with the mixture terms orthogonal to the process factors. Symmetric regions are used to develop spherical and rotatable response surface designs for mixtures. The central composite design and designs based on the icosahedron and the dodecahedron are given for four-component mixtures. The uniform shell designs are three-level designs when applied to mixture experiments.

20.

The vast majority of the literature on the design of sampling plans by variables assumes that the quality characteristic is normally distributed and that only its mean varies, while its variance is known and constant. For many processes, however, the quality variable is non-normal, and either or both of its mean and variance can shift randomly. In this paper, an optimal economic approach is developed for the design of variables acceptance sampling plans when the quality characteristic follows an Inverse Gaussian (IG) distribution. The advantage of an IG-based model is that it covers quality variables ranging from highly skewed to almost symmetric. We assume that the process is subject to two independent assignable causes, one of which shifts the mean of the quality characteristic and the other its variance. Since the quality variable may be affected by either or both assignable causes, three likely cases of shift (mean shift only, variance shift only, and both mean and variance shifts) are considered in the modeling. For each scenario, mathematical models giving the cost of using a variables acceptance sampling plan are developed, and the cost models are optimized to select the optimal plan parameters: the sample size and the upper and lower acceptance limits. A large set of numerical example problems is solved for all cases. Some of these examples are also used to depict the consequences of (1) assuming the quality variable is normally distributed when the true distribution is IG, and (2) using sampling plans from existing standards instead of the optimal plans derived by the proposed methodology. Sensitivities of some of the model input parameters are studied using the analysis of variance technique; the resulting information can guide model users in prudently allocating resources for estimating the input parameters.
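The acceptance probability entering such cost models is available in closed form from the IG cumulative distribution function. A minimal sketch (the parameter and limit values are illustrative only):

```python
from math import erf, exp, sqrt

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ig_cdf(x, mu, lam):
    """CDF of the Inverse Gaussian distribution with mean mu and shape lam."""
    a = sqrt(lam / x)
    return (norm_cdf(a * (x / mu - 1.0))
            + exp(2.0 * lam / mu) * norm_cdf(-a * (x / mu + 1.0)))

# probability that a quality reading falls inside acceptance limits [L, U]
mu, lam, L, U = 1.0, 4.0, 0.5, 2.0
p_accept = ig_cdf(U, mu, lam) - ig_cdf(L, mu, lam)
print(round(p_accept, 4))
```

Shifting mu or lam in this computation is how the three assignable-cause scenarios (mean shift, variance shift, both) feed into the expected-cost expressions.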


Copyright©北京勤云科技发展有限公司  京ICP备09084417号