Similar Literature
20 similar documents found (search time: 312 ms)
1.
A procedure for constructing confounded designs for mixed factorial experiments, derived from the Chinese Remainder Theorem, is presented. The proposed procedure is compared with existing alternatives, all of which rely on modular arithmetic.
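As a loose illustration of the modular arithmetic such constructions rely on (a minimal sketch, not the paper's procedure), the Chinese Remainder Theorem identifies each level combination of a mixed factorial with pairwise-coprime numbers of levels with a single run index; `crt_index` below is a hypothetical helper.

```python
from math import prod

def crt_index(levels, moduli):
    """Map factor levels (residues) to a single run index via the Chinese
    Remainder Theorem, assuming the moduli are pairwise coprime."""
    M = prod(moduli)
    x = 0
    for r, m in zip(levels, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(..., -1, m) is the modular inverse
    return x % M

# A 2x3x5 mixed factorial: run indices 0..29 are in one-to-one correspondence
# with level combinations, and decoding is just taking residues.
print(crt_index([1, 2, 4], [2, 3, 5]))   # -> 29
print(29 % 2, 29 % 3, 29 % 5)            # -> 1 2 4
```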

2.
A common strategy for avoiding information overload in multi-factor paired comparison experiments is to employ pairs of options which have different levels for only some of the factors in a study. For the practically important case where the factors fall into three groups such that all factors within a group have the same number of levels and where one is only interested in estimating the main effects, a comprehensive catalogue of D-optimal approximate designs is presented. These optimal designs use at most three different types of pairs and have a block diagonal information matrix.
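For intuition (a toy sketch under an assumed ±1 effect coding, not taken from the catalogue), each pair can be represented by the difference of the two options' coding vectors, so the main-effects information matrix is X′X; whether it comes out (block) diagonal is easy to check numerically.

```python
import numpy as np

# Four options on 3 two-level factors, coded +/-1, and four comparison pairs.
options = np.array([[ 1,  1,  1],
                    [-1,  1, -1],
                    [ 1, -1, -1],
                    [-1, -1,  1]])
pairs = [(0, 1), (2, 3), (0, 2), (1, 3)]
X = np.array([options[i] - options[j] for i, j in pairs])

info = X.T @ X                        # main-effects information matrix
print(info)                           # diagonal here: diag(8, 8, 16)
print(np.linalg.det(info) ** (1/3))   # D-criterion on a per-parameter scale
```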

3.
“Dispersion” effects are considered in addition to “location” effects of factors in the inferential procedure of sequential factor screening experiments with m factors, each at two levels, under search linear models. Search designs for measuring “dispersion” and “location” effects of factors are presented for both stage one and stage two of factor screening experiments with 4 ≤ m ≤ 10.

4.
Simulation models often include a large number of input factors, many of which may be unimportant to the output; this justifies the use of factor screening experiments to eliminate unimportant input factors from consideration in later stages of analysis. With a large number of factors, the challenge is to design experiments so that the total number of runs, and consequently the required time and cost, decrease while a satisfactory detection rate is achieved. This article employs the frequency domain method (FDM), which is applicable to discrete-event simulation models, to propose a new statistic defined as the ratio of the estimated signal spectrum to the maximum estimated noise spectrum. The proposed method not only retains the advantages of the FDM over classic screening approaches but also helps to reduce the error associated with distinguishing important effects from unimportant ones. Furthermore, it is shown that, as an alternative to the existing statistics, the proposed statistic does not deteriorate the power of the screening test and in some instances improves it.
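The spectrum-ratio statistic can be sketched as follows (an assumed form, for illustration only): drive a factor sinusoidally at a known frequency across the runs, estimate the output spectrum, and compare its value at the driving frequency with the largest value among the remaining "noise" frequencies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, f_drive = 512, 50                      # number of runs, driving-frequency index
x = np.sin(2 * np.pi * f_drive * np.arange(n) / n)
y = 3.0 * x + rng.normal(size=n)          # toy "simulation output"

pgram = np.abs(np.fft.rfft(y)) ** 2 / n   # periodogram of the output
signal = pgram[f_drive]                   # power at the driving frequency
noise = np.delete(pgram[1:], f_drive - 1).max()   # max power elsewhere (no DC)
print(signal / noise)                     # a large ratio flags an active factor
```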

5.
Supersaturated designs are factorial designs in which the number of potential effects is greater than the run size. They are commonly used in screening experiments, with the aim of identifying the dominant active factors at low cost. However, an important research field, which is poorly developed, is the analysis of such designs with non-normal responses. In this article, we develop a variable selection strategy through a modification of the PageRank algorithm, which is commonly used in the Google search engine for ranking webpages. The proposed method incorporates an appropriate information-theoretical measure into this algorithm and, as a result, can be used efficiently for factor screening. A noteworthy advantage of this procedure is that it allows the use of supersaturated designs for analyzing discrete data, and therefore a generalized linear model is assumed. As depicted via a thorough simulation study, in which the Type I and Type II error rates are computed for a wide range of underlying models and designs, the presented approach can be considered quite advantageous and effective.
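A minimal sketch of the ranking backbone (plain PageRank power iteration on a hypothetical factor-similarity weight matrix; the paper's information-theoretical measure is not reproduced here):

```python
import numpy as np

def pagerank(W, d=0.85, tol=1e-10):
    """Power iteration on column-normalised weights W; d is the usual
    damping factor. W here is a hypothetical factor-similarity matrix."""
    n = W.shape[0]
    P = W / W.sum(axis=0, keepdims=True)     # make columns sum to one
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * P @ r
        if np.abs(r_new - r).max() < tol:
            return r_new
        r = r_new

# Toy weights between four factors; a higher score suggests a more
# "active" factor in the screening sense.
W = np.array([[0.0, 1.0, 2.0, 0.5],
              [1.0, 0.0, 1.0, 0.2],
              [2.0, 1.0, 0.0, 0.1],
              [0.5, 0.2, 0.1, 0.0]])
print(pagerank(W).round(3))
```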

6.
This paper concerns the problem of reconstructing images from noisy data by means of Bayesian classification methods. In Klein and Press, 1992, the authors presented a method for reconstructing images called Adaptive Bayesian Classification (ABC). The ABC procedure was shown to perform very well in simulation experiments. The ABC procedure was multistaged; moreover, it involved selecting a prior at Stage n that was the posterior at Stage n − 1. In this paper the authors show that ABC can be improved upon for some problems by modifying the way the prior is taken at each stage. The new proposal is to take the prior for the pixel label at each stage as proportional to the number of pixels with that label in a small neighborhood of the pixel. The ABC procedure with a locally proportional prior (ABC/LPP) tends to improve upon the ABC procedure for some problems because the prior in the iterative portion of ABC/LPP is contextual, while that in ABC is non-contextual.
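The locally proportional prior is simple to state in code; below is a minimal sketch (hypothetical helper and neighbourhood convention, assuming a 3×3 window):

```python
import numpy as np

def local_prior(labels, i, j, n_classes, radius=1):
    """Prior for the label of pixel (i, j): proportional to how often each
    class occurs in the surrounding (2*radius+1)^2 neighbourhood (clipped
    at the image border). A sketch of the stated idea, not the paper's code."""
    nb = labels[max(i - radius, 0):i + radius + 1,
                max(j - radius, 0):j + radius + 1]
    counts = np.bincount(nb.ravel(), minlength=n_classes).astype(float)
    return counts / counts.sum()

labels = np.array([[0, 0, 1],
                   [0, 1, 1],
                   [2, 1, 1]])
print(local_prior(labels, 1, 1, n_classes=3))   # -> [3/9, 5/9, 1/9]
```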

7.
Models with a large number of parameters (hundreds or thousands) often behave as if they depend upon only a few of them, with the rest having comparatively little influence. One challenge of sensitivity analysis with such models is screening the parameters to identify the influential ones, and then characterizing their influences.

Large models often require significant computing resources to evaluate their output, and so a good screening mechanism should be efficient: it should minimize the number of times a model must be exercised. This paper describes an efficient procedure to perform sensitivity analysis on deterministic models with specified ranges or probability distributions for each parameter.

It is based on repeated exercising of the model, which can be treated as a black box. Statistical checks can ensure that the screening has identified the parameters that account for the bulk of the model variation. Subsequent sensitivity analysis can use the screening information to reduce the investment required to characterize the influence of influential and other parameters.

The procedure exploits simplifications in the dependence of a model output on model inputs. It works best where a small number of parameters are much more influential than all the rest. The method is much more sensitive to the number of influential parameters than to the total number of parameters. It is most effective when linear or quadratic effects dominate higher order effects and complex interactions.

The paper presents a set of Mathematica functions that can be used to create a variety of types of experimental designs useful for sensitivity analysis, including simple random, Latin hypercube and fractional factorial sampling. Each sampling method can use discretization, folding, grouping and replication to create composite designs. These techniques have been combined in a composite approach called Iterated Fractional Factorial Design (IFFD).
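As a rough Python counterpart to one of these building blocks (Latin hypercube sampling; a sketch, not the paper's Mathematica code):

```python
import numpy as np

def latin_hypercube(n, k, rng=None):
    """Simple Latin hypercube sample of n points in k dimensions on [0, 1)^k:
    each dimension is split into n strata and each stratum is hit once."""
    rng = np.random.default_rng(rng)
    u = rng.random((n, k))
    # an independent random permutation of the strata for every dimension
    perms = np.argsort(rng.random((n, k)), axis=0)
    return (perms + u) / n

X = latin_hypercube(8, 3, rng=0)
print(np.sort(X[:, 0]))   # one point per stratum [i/8, (i+1)/8) in each column
```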

The procedure is applied to a model of nuclear fuel waste disposal, and to simplified example models to demonstrate the concepts involved.

8.
We propose a new procedure for detecting a patch of outliers or influential observations in the autoregressive integrated moving average (ARIMA) model using local influence analysis. It is shown that the dependence structure of time series data gives rise to masking or smearing effects when local influence analysis is performed using current perturbation schemes. We suggest a new perturbation scheme that takes into account the dependence structure of time series data, and employ the stepwise local influence method to give a diagnostic procedure. We show that the new perturbation scheme can avoid smearing effects, and that the stepwise technique of local influence can successfully deal with masking effects. Various simulation studies are performed to show the efficiency of the proposed methodology, and a real example is used for illustration.

9.
A double-bootstrap confidence interval must usually be approximated by a Monte Carlo simulation consisting of two nested levels of bootstrap sampling. We provide an analysis of the coverage accuracy of the interval which takes account of both the inherent bootstrap and Monte Carlo errors. The analysis shows that, by a suitable choice of the number of resamples drawn at the inner level of bootstrap sampling, we can reduce the order of coverage error. We also consider the effects of performing a finite Monte Carlo simulation on the mean length and variability of length of two-sided intervals. An adaptive procedure is presented for the choice of the number of inner-level resamples. The effectiveness of the procedure is illustrated through a small simulation study.
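The nested structure is easy to see in a sketch (a crude percentile-interval calibration for a mean, with illustrative resample counts B and C; not the paper's adaptive rule):

```python
import numpy as np

def double_bootstrap_ci(x, level=0.95, B=500, C=100, rng=None):
    """Crude sketch: estimate the coverage of the naive percentile interval
    with an inner bootstrap level, then adjust the nominal level before
    forming the final percentile interval. B and C are illustrative."""
    rng = np.random.default_rng(rng)
    n, est = len(x), x.mean()
    cover = []
    for _ in range(B):                      # outer level
        xb = rng.choice(x, n, replace=True)
        inner = np.array([rng.choice(xb, n, replace=True).mean()
                          for _ in range(C)])   # inner level
        lo, hi = np.quantile(inner, [(1 - level) / 2, (1 + level) / 2])
        cover.append(lo <= est <= hi)
    adj = min(max(level + (level - np.mean(cover)), 0.5), 0.999)
    outer = np.array([rng.choice(x, n, replace=True).mean() for _ in range(B)])
    return np.quantile(outer, [(1 - adj) / 2, (1 + adj) / 2])

x = np.random.default_rng(1).normal(size=30)
print(double_bootstrap_ci(x, rng=2))
```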

10.
Supersaturated designs (SSDs) constitute a large class of fractional factorial designs which can be used for screening out the important factors from a large set of potentially active ones. A major advantage of these designs is that they reduce the experimental cost dramatically, but their crucial disadvantage is the confounding involved in the statistical analysis. Identification of active effects in SSDs has been the subject of much recent study. In this article we present a two-stage procedure for analyzing two-level SSDs assuming a main-effects-only model, without any interaction terms. The method combines sure independence screening (SIS) with different penalty functions, such as the smoothly clipped absolute deviation (SCAD), the Lasso and the MC+ penalty, achieving both the down-selection and the estimation of the significant effects simultaneously. Insights on using the proposed methodology are provided through various simulation scenarios, and several comparisons with existing approaches, such as stepwise selection in combination with SCAD and the Dantzig selector (DS), are presented as well. Results of the numerical study and a real data analysis reveal that the proposed procedure can be considered an advantageous tool due to its extremely good performance in identifying active factors.
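A rough sketch of the two-stage idea, with scikit-learn's Lasso standing in for the SCAD/MC+ penalties (which scikit-learn does not ship):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 14, 24                                  # supersaturated: p > n
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 5] + rng.normal(scale=0.5, size=n)

# Stage 1 (SIS): keep the d factors with the largest marginal correlation.
d = n // 2
keep = np.argsort(np.abs(X.T @ y))[-d:]

# Stage 2: penalised regression on the screened factors only.
coef = Lasso(alpha=0.3).fit(X[:, keep], y).coef_
print(sorted(keep[np.abs(coef) > 1e-8]))       # indices of declared-active factors
```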

11.
A method of calculating simultaneous one-sided confidence intervals for all ordered pairwise differences of the treatment effects θj − θi, 1 ≤ i < j ≤ k, in a one-way model without any distributional assumptions is discussed. When it is known a priori that the treatment effects satisfy the simple ordering θ1 ≤ … ≤ θk, these simultaneous confidence intervals offer the experimenter a simple way of determining which treatment effects may be declared to be unequal, and the method is more powerful than the usual two-sided Steel-Dwass procedure. Some exact critical points required by the confidence intervals are presented for k = 3 and small sample sizes, and other methods of critical point determination, such as asymptotic approximation and simulation, are discussed.

12.
Detection of outliers or influential observations is an important task in statistical modeling, especially for correlated time series data. In this paper we propose a new procedure to detect patches of influential observations in the generalized autoregressive conditional heteroskedasticity (GARCH) model. We first compare the performance of the innovative, additive and data perturbation schemes in local influence analysis. We find that the innovative perturbation scheme gives better results than the other two schemes, although it may suffer from masking effects. We then use the stepwise local influence method under the innovative perturbation scheme to detect patches of influential observations and uncover masking effects. Simulation studies show that the new technique can successfully detect a patch of influential observations or outliers under the innovative perturbation scheme. The analysis based on simulation studies and two real data sets shows that the stepwise local influence method under the innovative perturbation scheme is efficient for detecting multiple influential observations and dealing with masking effects in the GARCH model.

13.
In many applications (geosciences, insurance, etc.), the peaks-over-threshold (POT) approach is one of the most widely used methodologies for extreme quantile inference. It mainly consists of approximating the distribution of exceedances above a high threshold by a generalized Pareto distribution (GPD). The number of exceedances used in POT inference is often quite small, which typically leads to high volatility of the estimates. Inspired by perfect sampling techniques used in simulation studies, we define a folding procedure that connects the lower and upper parts of a distribution. A new extreme quantile estimator motivated by this theoretical folding scheme is proposed and studied. Although the asymptotic behaviour of our new estimator is the same as that of the classical (non-folded) one, our folding procedure significantly reduces the mean squared error of the extreme quantile estimates for small and moderate samples. This is illustrated in the simulation study. We also apply our method to an insurance dataset.
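For context, the classical (non-folded) POT step that the paper starts from can be sketched as follows (toy data; the folding refinement itself is not reproduced):

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
x = rng.pareto(3.0, size=2000) + 1          # heavy-tailed toy sample
u = np.quantile(x, 0.95)                    # high threshold
exc = x[x > u] - u
xi, _, beta = genpareto.fit(exc, floc=0)    # GPD shape and scale

p = 0.999                                   # target extreme quantile level
zeta = exc.size / x.size                    # exceedance probability
q = u + genpareto.ppf(1 - (1 - p) / zeta, xi, scale=beta)
print(q, np.quantile(x, p))                 # POT estimate vs empirical quantile
```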

14.
A method for constructing asymmetrical (mixed-level) designs, satisfying the balancing and interaction estimability requirements with a number of runs as small as possible, is proposed in this paper. The method, based on a heuristic procedure, uses a new optimality criterion formulated here. The proposed method demonstrates efficiency in terms of searching time and optimality of the attained designs. A complete collection of such asymmetrical designs with two- and three-level factors is available. A technological application is also presented.

15.
A generalization of the group screening technique for finding the non-negligible factors in a first-order model is presented. In conventional screening designs, each factor is assigned to a single factor group, and the sum of the effects associated with each group is estimated in the first stage. The new designs assign a factor to multiple groups in the first stage. An individual effect is estimated in the second stage only if each group to which the corresponding factor is assigned is “active” in the first stage. The performance of these designs is compared to that of conventional group screening designs for cases in which the direction of each factor, if active, is assumed known a priori and measurement error is negligible.
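A toy sketch of the multiple-group logic under the stated assumptions (known effect directions, negligible measurement error), with a hypothetical grouping:

```python
# Factor 2 is truly active; a factor advances to stage two only if *every*
# group containing it is declared active in stage one. With known effect
# directions and no measurement error, a group is active exactly when it
# contains an active factor (no cancellation can occur).
n_factors, truth = 12, {2}
groups = [set(range(0, 6)), set(range(6, 12)),     # grouping 1: halves
          {f for f in range(12) if f % 2 == 0},    # grouping 2: even factors
          {f for f in range(12) if f % 2 == 1}]    #             odd factors

active_groups = [g for g in groups if g & truth]
stage2 = {f for f in range(n_factors)
          if all(f not in g or g in active_groups for g in groups)}
print(sorted(stage2))   # -> [0, 2, 4]: only these need individual estimation
```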

16.
Screening is the first stage of many industrial experiments and is used to determine, efficiently and effectively, a small number of potential factors among a large number of factors which may affect a particular response. In a recent paper, Jones and Nachtsheim [A class of three-level designs for definitive screening in the presence of second-order effects. J. Qual. Technol. 2011;43:1–15] gave a class of three-level designs for screening in the presence of second-order effects using a variant of the coordinate-exchange algorithm of Meyer and Nachtsheim [The coordinate-exchange algorithm for constructing exact optimal experimental designs. Technometrics 1995;37:60–69]. Xiao et al. [Constructing definitive screening designs using conference matrices. J. Qual. Technol. 2012;44:2–8] used conference matrices to construct definitive screening designs with good properties. In this paper, we propose a method for constructing efficient three-level screening designs based on weighing matrices and their complete foldover, which can be considered a generalization of the method of Xiao et al. Many new orthogonal three-level screening designs are constructed and their properties are explored. These designs are highly D-efficient and provide uncorrelated estimates of main effects that are unbiased by any second-order effect. Our approach is relatively straightforward, and no computer search is needed since our designs are constructed from known weighing matrices.
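The complete-foldover construction is easy to illustrate (a small sketch using a 4×4 weighing matrix W(4, 3); the paper's designs use larger known weighing matrices):

```python
import numpy as np

# Stack a weighing matrix C, its negation and a centre run, as in definitive
# screening designs built from conference-type matrices.
C = np.array([[ 0,  1,  1,  1],
              [ 1,  0,  1, -1],
              [ 1, -1,  0,  1],
              [ 1,  1, -1,  0]])     # C^T C = 3 * I (a W(4, 3) weighing matrix)
D = np.vstack([C, -C, np.zeros((1, 4), dtype=int)])   # 9-run, 4-factor design
print(D.T @ D)   # main-effect columns are orthogonal: 6 * I
```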

17.
Variable selection in multiple linear regression models is considered. It is shown that for the special case of orthogonal predictor variables, an adaptive pre-test-type procedure proposed by Venter and Steel [Simultaneous selection and estimation for the some zeros family of normal models, J. Statist. Comput. Simul. 45 (1993), pp. 129–146] is almost equivalent to least angle regression, proposed by Efron et al. [Least angle regression, Ann. Stat. 32 (2004), pp. 407–499]. A new adaptive pre-test-type procedure is proposed, which extends the procedure of Venter and Steel to the general non-orthogonal case in a multiple linear regression analysis. This new procedure is based on a likelihood ratio test where the critical value is determined data-dependently. A practical illustration and results from a simulation study are presented.
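For reference, least angle regression itself is available off the shelf; a toy orthogonal-predictor example (illustrative, not the paper's adaptive pre-test):

```python
import numpy as np
from sklearn.linear_model import Lars

# Orthonormal predictor columns, two truly active coefficients.
X = np.linalg.qr(np.random.default_rng(0).normal(size=(50, 5)))[0]
beta = np.array([4.0, 0.0, -3.0, 0.0, 0.0])
y = X @ beta + np.random.default_rng(1).normal(scale=0.1, size=50)

model = Lars(n_nonzero_coefs=2).fit(X, y)
print(np.nonzero(model.coef_)[0])   # -> [0 2], the truly active predictors
```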

18.
Recently, Ong and Mukerjee [Probability matching priors for two-sided tolerance intervals in balanced one-way and two-way nested random effects models. Statistics. 2011;45:403–411] developed two-sided Bayesian tolerance intervals, with approximate frequentist validity, for a future observation in balanced one-way and two-way nested random effects models. These were obtained using probability matching priors (PMP). On the other hand, Krishnamoorthy and Lian [Closed-form approximate tolerance intervals for some general linear models and comparison studies. J Stat Comput Simul. 2012;82:547–563] studied closed-form approximate tolerance intervals obtained by the modified large-sample (MLS) approach. We compare the performances of these two approaches for normal as well as non-normal error distributions. Monte Carlo simulation methods are used to evaluate the resulting tolerance intervals with regard to achieved confidence levels and expected widths. It turns out that the PMP tolerance intervals are less conservative for data with a large number of classes and a small number of observations per class, and that the MLS procedure is preferable for smaller sample sizes.

19.
The analysis of designs based on saturated orthogonal arrays poses a very difficult challenge, since there are no degrees of freedom left to estimate the error variance. In this paper we propose a heuristic approach that uses a cumulative sum (CUSUM) control chart for screening active effects in orthogonal-saturated experiments. A comparative simulation study demonstrates the power of the proposed method.
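A rough sketch of the idea (an upper CUSUM run over the ordered absolute effect estimates; the slack k, decision limit h and reference value are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
effects = rng.normal(size=12)                # effects from, e.g., a saturated array
effects[[0, 1]] = [6.0, -5.0]                # plant two active effects

a = np.sort(np.abs(effects))                 # inert effects first, active last
ref = np.median(a)                           # robust in-control reference
k, h, s = 0.5, 4.0, 0.0
flags = []
for v in a:
    s = max(0.0, s + (v - ref) - k)          # upper CUSUM accumulates drift
    flags.append(s > h)
print(np.round(a, 2).tolist(), flags)        # True flags the large (active) effects
```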

20.
A supersaturated design (SSD) is a design whose run size is not large enough to estimate all the main effects. The goal in conducting such a design is to identify the presumably few relatively dominant active effects at as low a cost as possible. However, data analysis of such designs remains primitive: traditional approaches are not appropriate in this situation, and the methods proposed in the literature in recent years are effective when used to analyze two-level SSDs. In this paper, we introduce a variable selection procedure, called the PLSVS method, to screen active effects in mixed-level SSDs based on the variable importance in projection (VIP), an important concept in partial least-squares regression. Simulation studies show that this procedure is effective.
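The VIP scores that PLSVS builds on can be computed from a fitted partial least-squares model; below is a standard VIP formula applied to a toy two-level example (the paper targets mixed-level SSDs), a sketch rather than the PLSVS procedure itself:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip(pls, X):
    """Variable importance in projection from a fitted PLSRegression; a
    standard VIP formula, assumed to match what PLSVS builds on."""
    t = pls.transform(X)                         # x-scores (n x A)
    w, q = pls.x_weights_, pls.y_loadings_       # (p x A) and (1 x A)
    ssy = np.sum(t ** 2, axis=0) * q[0] ** 2     # y-variance explained per comp.
    wnorm2 = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(w.shape[0] * wnorm2 @ ssy / ssy.sum())

rng = np.random.default_rng(0)
n, p = 12, 20                                    # supersaturated: p > n
X = rng.choice([-1.0, 1.0], size=(n, p))
y = 4 * X[:, 3] - 3 * X[:, 11] + rng.normal(scale=0.5, size=n)
scores = vip(PLSRegression(n_components=2).fit(X, y), X)
print(np.argsort(scores)[-2:])                   # the two largest-VIP factors
```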
