Similar Literature (20 documents found)
1.
It is shown how fractional replication can be used in simulation studies, with examples. Considerable savings in the number of runs required can be achieved through fractional replication ideas.
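To make the savings concrete, here is a minimal Python sketch of how a two-level fractional factorial is built: a full factorial in a few base factors, with additional factors aliased to interaction columns. The generator choice (D = AB, E = AC) is illustrative, not taken from the paper.

```python
import itertools
import numpy as np

def fractional_factorial(k_base, generators):
    """Build a two-level fractional factorial: a full 2^k_base design in the
    base factors, plus extra columns defined as products of base columns."""
    base = np.array(list(itertools.product([-1, 1], repeat=k_base)))
    extra = [np.prod(base[:, idx], axis=1) for idx in generators]
    return np.column_stack([base] + extra)

# 2^(5-2) design: 5 factors studied in 8 runs instead of 32.
# Generators (illustrative): D = AB, E = AC.
design = fractional_factorial(3, [(0, 1), (0, 2)])
print(design.shape)   # (8, 5)
```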

2.
Models with large numbers of parameters (hundreds or thousands) often behave as if they depend on only a few of them, with the rest having comparatively little influence. One challenge of sensitivity analysis with such models is screening the parameters to identify the influential ones, and then characterizing their influences.

Large models often require significant computing resources to evaluate their output, and so a good screening mechanism should be efficient: it should minimize the number of times a model must be exercised. This paper describes an efficient procedure to perform sensitivity analysis on deterministic models with specified ranges or probability distributions for each parameter.

The procedure is based on repeatedly exercising the model, which can be treated as a black box. Statistical checks can confirm that the screening identified the parameters that account for the bulk of the model variation. Subsequent sensitivity analysis can then use the screening information to reduce the effort required to characterize the influences of the influential parameters and the rest.

The procedure exploits simplifications in the dependence of a model output on model inputs. It works best where a small number of parameters are much more influential than all the rest. The method is much more sensitive to the number of influential parameters than to the total number of parameters. It is most effective when linear or quadratic effects dominate higher order effects and complex interactions.

The paper presents a set of Mathematica functions that can be used to create a variety of experimental designs useful for sensitivity analysis, including simple random, Latin hypercube, and fractional factorial sampling. Each sampling method can use discretization, folding, grouping, and replication to create composite designs. These techniques have been combined in a composite approach called Iterated Fractional Factorial Design (IFFD).

The procedure is applied to a model of nuclear fuel waste disposal, and to simplified example models to demonstrate the concepts involved.
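The paper's designs are generated in Mathematica; the sketch below, in Python, illustrates two of the ingredients named above (Latin hypercube sampling and a cheap linear screening fit) on a toy model. The model, sample sizes, and the use of absolute regression coefficients as a screening score are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n, d, rng):
    """One point in each of n equal-probability strata per dimension."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

# Toy model with 20 inputs but only 3 influential ones.
d, n = 20, 200
X = latin_hypercube(n, d, rng)           # inputs scaled to [0, 1]
y = 5*X[:, 0] + 3*X[:, 1]**2 - 4*X[:, 2] + 0.1*rng.standard_normal(n)

# Screening: least-squares linear fit; rank inputs by |coefficient|.
A = np.column_stack([np.ones(n), X])
coef = np.linalg.lstsq(A, y, rcond=None)[0][1:]
ranking = np.argsort(-np.abs(coef))
print("screened as influential:", ranking[:3])   # typically inputs 0, 1, 2
```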

3.
We investigate the problem of regression from multiple reproducing kernel Hilbert spaces by means of an orthogonal greedy algorithm. The greedy algorithm is appealing because it uses a small portion of the candidate kernels to represent the approximation of the regression function, and can greatly reduce the computational burden of traditional multi-kernel learning. Satisfactory learning rates are obtained based on the Rademacher chaos complexity and data-dependent hypothesis spaces.
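A minimal sketch of an orthogonal greedy algorithm over a kernel dictionary: at each step the column most correlated with the current residual is added, and the fit is re-orthogonalized by least squares on all selected columns. Using Gaussian kernel sections at the data points, for several bandwidths, as a stand-in for multiple RKHSs is an assumption for illustration.

```python
import numpy as np

def orthogonal_greedy(D, y, n_iter):
    """Orthogonal greedy algorithm: pick the dictionary column most
    correlated with the residual, then re-project y onto the span of
    all columns selected so far."""
    selected, residual, coef = [], y.copy(), None
    for _ in range(n_iter):
        scores = np.abs(D.T @ residual) / np.linalg.norm(D, axis=0)
        j = int(np.argmax(scores))
        if j not in selected:
            selected.append(j)
        coef, *_ = np.linalg.lstsq(D[:, selected], y, rcond=None)
        residual = y - D[:, selected] @ coef
    return selected, coef

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100)
y = np.sin(3 * x) + 0.05 * rng.standard_normal(100)

# Candidate dictionary: Gaussian kernel sections at the data points,
# for three bandwidths (a stand-in for three candidate RKHSs).
D = np.column_stack([np.exp(-(x[:, None] - x[None, :])**2 / (2 * h**2))
                     for h in (0.1, 0.3, 1.0)])
sel, coef = orthogonal_greedy(D, y, n_iter=8)
print(len(sel), "kernel sections selected out of", D.shape[1])
```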

4.
In this article, we consider experimental situations where a blocked regular two-level fractional factorial initial design is used. We investigate the use of the semi-fold technique as a follow-up strategy for de-aliasing effects that are confounded in the initial design, as well as an alternative method for constructing blocked fractional factorial designs. A construction method is suggested based on the full foldover technique, and sufficient conditions are obtained under which the semi-fold yields as many estimable effects as the full foldover.
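A small sketch of the foldover constructions involved: the full foldover appends the sign-reversed runs of the initial fraction, while the semi-fold keeps only half of those foldover runs (here, those with A = +1; the generator and the choice of half are illustrative).

```python
import itertools
import numpy as np

# Initial 2^(4-1) fraction with D = ABC (illustrative generator).
base = np.array(list(itertools.product([-1, 1], repeat=3)))
initial = np.column_stack([base, base.prod(axis=1)])

full_foldover = np.vstack([initial, -initial])        # reverse all signs
# Semi-fold: append only the foldover runs that have A = +1.
semifold = np.vstack([initial, -initial[-initial[:, 0] == 1]])

print(initial.shape, full_foldover.shape, semifold.shape)  # (8,4) (16,4) (12,4)
```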

5.
In many applications, it is of interest to simultaneously model the mean and variance of a response when no replication exists. Modeling the mean and variance simultaneously is commonly referred to as dual modeling. Parametric approaches to dual modeling are popular when the underlying mean and variance functions can be expressed explicitly. Quite often, however, nonparametric approaches are more appropriate due to the presence of unusual curvature in the underlying functions. In sparse data situations, nonparametric methods often fit the data too closely while parametric estimates exhibit problems with bias. We propose a semi-parametric dual modeling approach [Dual Model Robust Regression (DMRR)] for non-replicated data. DMRR combines parametric and nonparametric fits resulting in improved mean and variance estimation. The methodology is illustrated with a data set from the literature as well as via a simulation study.
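The sketch below is not DMRR itself, but a minimal illustration of dual modeling on non-replicated data: a parametric (linear) fit for the mean, then a nonparametric kernel smooth of the squared residuals for the variance. All settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 60))
y = 1 + 2*x + rng.standard_normal(60) * (0.2 + 0.8*x)   # variance grows with x

# Mean model (parametric): straight line by least squares.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Variance model (nonparametric): Nadaraya-Watson smooth of squared residuals.
def nw_smooth(x0, x, z, h=0.15):
    w = np.exp(-(x0[:, None] - x[None, :])**2 / (2*h**2))
    return (w @ z) / w.sum(axis=1)

var_hat = nw_smooth(x, x, resid**2)
print(beta, var_hat[[0, -1]])   # variance estimate rises along x
```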

6.
Application of the balanced repeated replication method of variance estimation can become cumbersome as well as expensive when the number of replicates involved is large. While a number of replication methods of variance estimation requiring a reduced number of replicates have been proposed, the corresponding reduction in computational effort is accompanied by a loss in precision. In this article, this loss in precision is evaluated in the linear case. The results obtained may be useful in practice in balancing precision against computational cost.
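A compact sketch of balanced repeated replication for a design with two PSUs per stratum, using columns of a Hadamard matrix to define balanced half-samples; the data and the number of strata are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(3)
H_strata = 7                       # strata, two PSUs each
y = rng.normal(10, 2, (H_strata, 2))

# Balanced half-samples from a Hadamard matrix of order >= number of strata.
Hmat = hadamard(8)[:, :H_strata]   # 8 replicates; columns index strata
theta_full = y.mean()

reps = []
for r in Hmat:
    pick = (r + 1) // 2            # +1 -> PSU 1, -1 -> PSU 0
    half = y[np.arange(H_strata), pick]
    reps.append(half.mean())
reps = np.array(reps)

var_brr = np.mean((reps - theta_full)**2)
print("BRR variance estimate:", var_brr)
```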

7.
The classical chi-square test of goodness of fit tests the hypothesis that data arise from some parametric family of distributions against the nonparametric alternative that they arise from some other distribution. However, the chi-square test requires continuous data to be grouped into arbitrary categories. Furthermore, as the test is based upon an approximation, it can only be used if there are sufficient data. In practice, these requirements are often wasteful of information and overly restrictive. The authors explore the use of the fractional Bayes factor to obtain a Bayesian alternative to the chi-square test when no specific prior information is available. They consider the extent to which their methodology can handle small data sets and continuous data without arbitrary grouping.
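To see the grouping step the abstract criticizes, here is a small sketch: continuous data must be forced into arbitrary bins (here, eight equal-probability bins under the fitted normal) before the chi-square statistic can even be computed. The bin count and the degrees-of-freedom adjustment for two estimated parameters follow the usual textbook recipe; the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 200)

# Arbitrary grouping: 8 equal-probability bins under the fitted normal,
# exactly the step the fractional Bayes factor approach avoids.
mu, sd = x.mean(), x.std(ddof=1)
cuts = stats.norm.ppf(np.linspace(0, 1, 9)[1:-1], mu, sd)  # 7 finite cut points
observed = np.bincount(np.searchsorted(cuts, x), minlength=8)
expected = np.full(8, len(x) / 8)

chi2 = ((observed - expected)**2 / expected).sum()
# df = bins - 1 - (number of estimated parameters)
pval = stats.chi2.sf(chi2, df=8 - 1 - 2)
print(chi2, pval)
```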

8.
This paper studies the goodness-of-fit test of the residual empirical process of a nearly unstable long-memory time series. Chan and Ling (2008) showed that the usual limit distribution of the Kolmogorov–Smirnov test statistics does not hold for an unstable autoregressive model. A key question of interest is what happens when this model has a near unit root, that is, when it is nearly unstable. In this paper, it is established that the statistics proposed by Chan and Ling can be generalized to encompass nearly unstable long-memory models. In particular, the limit distribution is expressed as a functional of an Ornstein–Uhlenbeck process that is driven by a fractional Brownian motion. Simulation studies demonstrate that the test statistic possesses desirable finite-sample properties and power.

9.
The computer construction of optimal or near-optimal experimental designs is common in practice. Search procedures are often based on the non-zero eigenvalues of the information matrix of the design. Minimising the average of the pairwise treatment variances can also be used as a search criterion. For equal treatment replication these approaches are equivalent to maximising the harmonic mean of the design's canonical efficiency factors, but differ when treatments are unequally replicated. This paper investigates the extent of these differences and discusses some apparent inconsistencies previously observed when comparing the optimality of equally and unequally replicated designs.
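A short sketch of the quantities being compared: the canonical efficiency factors of a block design are the non-zero eigenvalues of the scaled information matrix, and their harmonic mean is the criterion mentioned above. The incidence matrix is an arbitrary small example with unequal replication.

```python
import numpy as np

# Incidence of a small block design: rows = treatments, cols = blocks.
N = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
r = N.sum(axis=1).astype(float)        # treatment replications (unequal)
k = N.sum(axis=0).astype(float)        # block sizes

# Information matrix C = diag(r) - N diag(1/k) N', and the canonical
# efficiency factors: eigenvalues of diag(r)^(-1/2) C diag(r)^(-1/2).
C = np.diag(r) - N @ np.diag(1.0 / k) @ N.T
A = np.diag(r**-0.5) @ C @ np.diag(r**-0.5)
eff = np.sort(np.linalg.eigvalsh(A))[1:]    # drop the single zero eigenvalue

harmonic_mean = len(eff) / np.sum(1.0 / eff)
print("efficiency factors:", np.round(eff, 3), "harmonic mean:", round(harmonic_mean, 3))
```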

10.
In statistical inference on the drift parameter a of a fractional Brownian motion W_t^H with Hurst parameter H ∈ (0, 1) and a constant drift, Y_t^H = at + W_t^H, there are many options for how to proceed. We may, for example, base this inference on the properties of the standard normal distribution applied to the differences between the observed values of the process at discrete times. Although such methods are very simple, it turns out to be more appropriate to use inverse methods. Such methods can be generalized to non-constant drift. For hypothesis testing about the drift parameter a, it is more proper to standardize the observed process and to use inverse methods based on the first exit time of the observed process from a pre-specified interval before some given time. These procedures are illustrated, and their times of decision are compared against the direct approach. Other generalizations are possible when the random part is a symmetric stochastic integral of a known deterministic function with respect to fractional Brownian motion.
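A hedged sketch of the setting: simulate Y_t^H = at + W_t^H by Cholesky factorization of the fBm covariance, then compare the simple endpoint estimator of a with a first-exit-time observation of the kind the inverse methods use. The interval width, horizon, and parameter values are illustrative, and the code assumes an exit occurs before the horizon.

```python
import numpy as np

def fbm(n, H, T=1.0, rng=None):
    """Fractional Brownian motion on (0, T] via Cholesky of its covariance."""
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None]**(2*H) + t[None, :]**(2*H)
                 - np.abs(t[:, None] - t[None, :])**(2*H))
    return t, np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(5)
a_true, H, n = 1.5, 0.7, 500
t, W = fbm(n, H, rng=rng)
Y = a_true * t + W

# Direct estimator of the drift from the endpoint.
print("endpoint estimate:", Y[-1] / t[-1])

# First-exit-time observation: when does Y leave the interval (-b, b)?
b = 0.5
exit_idx = np.argmax((Y > b) | (Y < -b))   # first index outside the interval
print("exit time:", t[exit_idx], "exit side:", np.sign(Y[exit_idx]))
```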

11.
Replication is complicated in psychological research because studies of a given psychological phenomenon can never be direct or exact replications of one another, and thus effect sizes vary from one study of the phenomenon to the next—an issue of clear importance for replication. Current large-scale replication projects represent an important step forward for assessing replicability, but provide only limited information because they have thus far been designed in a manner such that heterogeneity either cannot be assessed or is intended to be eliminated. Consequently, the nontrivial degree of heterogeneity found in these projects represents a lower bound on the true degree of heterogeneity. We recommend enriching large-scale replication projects going forward by embracing heterogeneity. We argue this is the key for assessing replicability: if effect sizes are sufficiently heterogeneous—even if the sign of the effect is consistent—the phenomenon in question does not seem particularly replicable and the theory underlying it seems poorly constructed and in need of enrichment. Uncovering why and revising theory in light of it will lead to improved theory that explains heterogeneity and increases replicability. Given this, large-scale replication projects can play an important role not only in assessing replicability but also in advancing theory.
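The heterogeneity being discussed is routinely quantified in meta-analysis; as a small illustration, here is the DerSimonian-Laird estimate of the between-study variance and the I^2 statistic for a set of replication effect sizes (the numbers are invented for illustration).

```python
import numpy as np

# Effect sizes and within-study variances from k replications (illustrative).
theta = np.array([0.42, 0.18, 0.31, 0.05, 0.57, 0.22])
v = np.array([0.02, 0.03, 0.025, 0.04, 0.02, 0.035])

w = 1.0 / v
theta_fixed = np.sum(w * theta) / np.sum(w)
Q = np.sum(w * (theta - theta_fixed)**2)   # Cochran's Q
df = len(theta) - 1

# DerSimonian-Laird between-study variance and the I^2 statistic.
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)
I2 = max(0.0, (Q - df) / Q) * 100

print(f"tau^2 = {tau2:.4f}, I^2 = {I2:.1f}%")
```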

12.
We provide an overview of some of the research of the last ten years involving computer network data traffic. We describe the original Ethernet data study which suggested that computer traffic is inherently different from telephone traffic and that in the context of computer networks, self-similar models such as fractional Brownian motion should be used. We show that the on–off model can physically explain the presence of self-similarity. While the on–off model involves bounded signals, it is also possible to consider arbitrary unbounded finite-variance signals or even infinite-variance signals whose distributions have heavy tails. We show that, in the latter case, one can still obtain self-similar processes with dependent increments, but these are not the infinite-variance fractional stable Lévy motions which have been commonly considered in the literature. The adequate model, in fact, can either have dependent or independent increments, and this depends on the respective size of two parameters, namely, the number of workstations in the network and the time scale under consideration. We indicate what happens when these two parameters become jointly asymptotically large. We conclude with some comments about high frequency behaviour and multifractals.
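A rough simulation sketch of the on–off explanation: superpose many 0/1 sources with heavy-tailed (Pareto) on-periods and estimate the Hurst parameter of the aggregate by the variance-time method. All constants (tail index, number of sources, horizon) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def onoff_source(T, alpha=1.5, rng=None):
    """0/1 path of one source: Pareto(alpha) on-periods (infinite variance
    for alpha < 2), exponential off-periods."""
    path, state, t = np.zeros(T), 1, 0
    while t < T:
        dur = int(rng.pareto(alpha) + 1) if state else int(rng.exponential(5) + 1)
        path[t:t + dur] = state
        t += dur
        state = 1 - state
    return path

# Superpose many sources, then estimate H from the variance-time plot:
# Var(mean over blocks of size m) ~ m^(2H - 2).
traffic = sum(onoff_source(20000, rng=rng) for _ in range(50))
ms = np.array([10, 20, 50, 100, 200, 500])
v = [np.var(traffic[:len(traffic)//m*m].reshape(-1, m).mean(axis=1)) for m in ms]
slope = np.polyfit(np.log(ms), np.log(v), 1)[0]
print("estimated Hurst parameter:", 1 + slope / 2)   # > 0.5 indicates self-similarity
```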

13.
A supersaturated design is a type of fractional factorial design in which the number of columns is greater than the number of rows. This type of design is useful when experiments are expensive, the number of factors is large, and there is a limitation on the number of runs. This paper presents some theorems on three-level supersaturated designs and their application to construction. The construction methods proposed in this paper can be regarded as an extension of the methods developed for two-level supersaturated designs.
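Construction via the paper's theorems is not reproduced here; the sketch below only shows how a candidate three-level supersaturated design can be scored by average pairwise column dependence, a chi-square-type criterion often used as a yardstick for such designs. The random balanced columns are purely illustrative.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_runs, n_cols = 9, 12                 # more columns than rows: supersaturated

# Candidate design: balanced three-level columns (each level appears 3 times).
design = np.array([rng.permutation([0, 1, 2] * 3) for _ in range(n_cols)]).T

def chi2_dependence(a, b, levels=3):
    """Chi-square statistic of the 3x3 contingency table of two columns."""
    obs = np.zeros((levels, levels))
    for i, j in zip(a, b):
        obs[i, j] += 1
    exp = len(a) / levels**2           # expected count with balanced margins
    return ((obs - exp)**2 / exp).sum()

pairs = list(combinations(range(n_cols), 2))
ave_chi2 = np.mean([chi2_dependence(design[:, i], design[:, j]) for i, j in pairs])
print("average chi-square dependence over", len(pairs), "column pairs:", round(ave_chi2, 3))
```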

14.
Parameter estimation with missing data is a frequently encountered problem in statistics. Imputation is often used to facilitate parameter estimation by simply applying the complete-sample estimators to the imputed dataset. In this article, we consider the problem of parameter estimation with nonignorable missing data using the approach of parametric fractional imputation proposed by Kim (2011). Using the fractional weights, the E-step of the EM algorithm can be approximated by the weighted mean of the imputed-data likelihood, where the fractional weights are computed from the current value of the parameter estimates. Calibration fractional imputation is also considered as a way of improving the Monte Carlo approximation in the fractional imputation. Variance estimation is also discussed. Results from two simulation studies are presented to compare the proposed method with existing methods. A real data example from the Korea Labor and Income Panel Survey (KLIPS) is also presented.
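A deliberately simplified sketch of parametric fractional imputation for a normal regression model with data missing at random (the paper treats the harder nonignorable case): imputed values are drawn once from a proposal and kept fixed, while the fractional weights, proportional to f(y | x; θ)/h(y), are recomputed at each EM step. All model settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, M = 300, 20                       # sample size, imputations per missing y
x = rng.normal(0, 1, n)
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)
miss = rng.random(n) < stats.norm.cdf(-0.5 + x)   # MAR: depends on x only
y_obs = np.where(miss, np.nan, y)

# Proposal h: a wide normal; imputed values stay fixed across EM iterations.
m0 = y_obs[~miss].mean()
y_imp = rng.normal(m0, 3.0, (miss.sum(), M))
h_dens = stats.norm.pdf(y_imp, m0, 3.0)

b0, b1, sig = 0.0, 0.0, 1.0
for _ in range(50):
    # E-step: fractional weights proportional to f(y | x; theta) / h(y).
    mu_imp = b0 + b1 * x[miss]
    w = stats.norm.pdf(y_imp, mu_imp[:, None], sig) / h_dens
    w /= w.sum(axis=1, keepdims=True)
    # M-step: weighted normal MLE over observed + imputed data.
    X_all = np.concatenate([x[~miss], np.repeat(x[miss], M)])
    y_all = np.concatenate([y_obs[~miss], y_imp.ravel()])
    w_all = np.concatenate([np.ones((~miss).sum()), w.ravel()])
    A = np.column_stack([np.ones_like(X_all), X_all])
    WA = A * w_all[:, None]
    b0, b1 = np.linalg.solve(A.T @ WA, WA.T @ y_all)
    sig = np.sqrt(np.sum(w_all * (y_all - A @ np.array([b0, b1]))**2) / np.sum(w_all))

print("PFI estimates:", round(b0, 3), round(b1, 3), round(sig, 3))
```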

15.
In this article, we consider the problem of testing hypotheses about mean vectors in the multiple-sample problem when the number of observations is smaller than the number of variables. First we propose an independence rule test (IRT) to deal with high-dimensional effects. The asymptotic distributions of the IRT under the null hypothesis as well as under the alternative are established when both the dimension and the sample size go to infinity. Next, using the derived asymptotic power of the IRT, we propose an adaptive independence rule test (AIRT) that is particularly designed for testing against sparse alternatives. The AIRT is novel in that it can effectively pick out a few relevant features and reduce the effect of noise accumulation. Real data analysis and Monte Carlo simulations are used to illustrate the proposed methods.
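The exact form and standardization of the IRT are in the paper; the sketch below shows one plausible version of the idea for two samples: ignore correlations, use only the diagonal of the pooled covariance, and apply a crude normal calibration. The null mean p and variance 2p are a rough approximation assumed for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
p, n1, n2 = 500, 30, 40                 # dimension exceeds both sample sizes
X1 = rng.standard_normal((n1, p))
X2 = rng.standard_normal((n2, p)) + np.r_[np.full(10, 0.8), np.zeros(p - 10)]

d = X1.mean(axis=0) - X2.mean(axis=0)
s2 = ((n1 - 1)*X1.var(axis=0, ddof=1) + (n2 - 1)*X2.var(axis=0, ddof=1)) / (n1 + n2 - 2)

# Independence rule: use only the diagonal of the covariance matrix.
T = np.sum(d**2 / (s2 * (1/n1 + 1/n2)))
# Centre and scale against an approximate null mean (p) and variance (2p).
Z = (T - p) / np.sqrt(2 * p)
print("standardized statistic:", round(Z, 2), "approx. p-value:", round(stats.norm.sf(Z), 4))
```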

16.
周晓剑 et al., 《统计研究》 (Statistical Research), 2014, 31(9): 102-106
In computing survival functions, life tables provide values only at integer ages, so computing the survival function at non-integer ages requires a fractional age assumption. Classical fractional age assumptions are mathematically tractable, but they tend to produce a discontinuous force-of-mortality function and, more importantly, cannot guarantee the accuracy of the estimates at fractional ages. A fractional age assumption is essentially an interpolation technique. This study introduces the Kriging model, whose interpolation performance is excellent, into the fractional age assumption: the survival function is interpolated at integer ages, and the force-of-mortality and expected-remaining-lifetime functions are then constructed from the well-fitted survival function. The validity of the Kriging-based fractional age assumption is verified against survival functions generated under Makeham's law; the results show that the interpolation performance of the Kriging model far surpasses that of classical fractional age assumption models.
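A small re-creation of the idea in Python, using a Gaussian process (Kriging) regression to interpolate integer-age survival values generated under Makeham's law. The kernel, its length scale, and the Makeham parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Makeham's law: mu(x) = A + B*c^x, so S(x) = exp(-A*x - B*(c^x - 1)/ln(c)).
A, B, c = 0.0007, 0.00005, 1.09
def survival(x):
    return np.exp(-A*x - B*(c**x - 1)/np.log(c))

ages = np.arange(30, 61, dtype=float)        # life-table values: integer ages only
gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), alpha=1e-10,
                              normalize_y=True)
gp.fit(ages[:, None], survival(ages))

# Interpolate the survival function at fractional ages and compare to truth.
frac = np.arange(30, 60, 0.25)
err = np.abs(gp.predict(frac[:, None]) - survival(frac))
print("max interpolation error at fractional ages:", err.max())
```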

17.
Repeating measurements of efficacy variables in clinical trials may be desirable when the measurement may be affected by ambient conditions. When such measurements are repeated at baseline and at the end of therapy, statistical questions relate to: (1) the best summary measurement to use for a subject when there is a possibility that some observations are contaminated and have increased variances; and (2) the effect of screening procedures which exclude outliers based on within- and between-subject contamination tests. We study these issues in two stages, each using a different set of models. The first stage deals only with the choice of the summary measure. The simulation results show that in some cases of contamination, the power achieved by the tests based on the median exceeds that achieved by the tests based on the mean of the replicates. However, even when we use the median, there are cases when contamination leads to a considerable loss in power. The combined issue of the best summary measurement and the effect of screening is studied in the second stage. The tests use either the observed data or the data after screening for outliers. The simulation results demonstrate that the power depends on the screening procedure as well as on the test statistic used in the study. We found that for the extent and magnitude of contamination considered, within-subject screening has a minimal effect on the power of the tests when there are at least three replicates; as a result, we found no advantage in the use of screening procedures for within-subject contamination. On the other hand, the use of a between-subject screening for outliers increases the power of the test procedures. However, even with the use of screening procedures, heterogeneity of variances can greatly reduce the power of the study.
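In the spirit of the first stage, the sketch below compares the power of tests based on the per-subject mean versus median of three replicates when a fraction of replicates are contaminated with inflated variance. The contamination rate, inflation factor, and effect size are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

def one_trial(effect, n=30, reps=3, contam=0.10, infl=5.0):
    """Per-subject replicates; a fraction are contaminated with inflated SD."""
    base = rng.normal(effect, 1.0, (n, reps))
    mask = rng.random((n, reps)) < contam
    x = np.where(mask, rng.normal(effect, infl, (n, reps)), base)
    p_mean = stats.ttest_1samp(x.mean(axis=1), 0).pvalue
    p_med = stats.ttest_1samp(np.median(x, axis=1), 0).pvalue
    return p_mean < 0.05, p_med < 0.05

res = np.array([one_trial(0.5) for _ in range(2000)])
print("power (mean summary):  ", res[:, 0].mean())
print("power (median summary):", res[:, 1].mean())
```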

18.
Life tables used in life insurance determine the age-at-death distribution only at integer ages. Therefore, actuaries make fractional age assumptions to interpolate between integer age values when they have to value payments that are not restricted to integer ages. Traditional fractional age assumptions, as well as the fractional independence assumption, are easy to apply but result in a non-intuitive overall shape of the force of mortality. Other approaches proposed either require expensive optimization procedures or produce many discontinuities. We suggest a new, computationally inexpensive algorithm to select the parameters within the LFM-family introduced by Jones and Mereu (Insur Math Econ 27:261–276, 2000). In contrast to previously suggested methods, our algorithm enforces a monotone force of mortality between integer ages if the mortality rates are monotone, and keeps the number of discontinuities small.

19.
To reduce the output variance, variance-based importance analysis can provide an efficient way: reduce the variance of the 'important' inputs. But as the variance of those 'important' inputs is reduced, the input importances change, and reducing the variance of those inputs alone is no longer the most efficient strategy. Thus, the analyst needs to consider reducing the variance of other inputs as well to obtain a more efficient approach. This work provides a graphical solution that the analyst can use to decide how to reduce the input variances so as to achieve the targeted reduction of the output variance efficiently. Furthermore, by an importance sampling-based approach, the graphical solution can be obtained with only a single group of samples, which greatly decreases the computational cost.
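For an additive model the trade-off is transparent, since the output variance decomposes exactly over the inputs; the sketch below tabulates how much each candidate input-variance reduction buys. The model is illustrative, and the paper's importance-sampling machinery is not reproduced.

```python
import numpy as np

# Additive toy model Y = sum(a_i X_i) with independent inputs:
# Var(Y) = sum(a_i^2 * s_i^2), so each input's contribution is explicit.
a = np.array([3.0, 2.0, 1.0, 0.5])
s2 = np.array([1.0, 1.0, 4.0, 9.0])
var_y = np.sum(a**2 * s2)

# Effect of shrinking one input's variance by a factor rho in (0, 1].
for i in range(len(a)):
    for rho in (0.75, 0.5, 0.25):
        new = var_y - a[i]**2 * s2[i] * (1 - rho)
        print(f"shrink Var(X{i}) to {rho:.2f} of itself: Var(Y) {var_y:.1f} -> {new:.1f}")
```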

20.
When it is impractical to perform the experimental runs of a fractional factorial design in a completely random order, restrictions on the randomization can be imposed. The resulting design is said to have a split-plot, or nested, error structure. Similarly to fractional factorials, fractional factorial split-plot designs can be ranked by using the aberration criterion. Techniques that generate the required designs systematically presuppose unreplicated settings of the whole-plot factors. We use a cheese-making experiment to demonstrate the practical relevance of designs with replicated settings of these factors. We create such designs by splitting the whole plots according to one or more subplot effects. We develop a systematic method to generate the required designs and we use the method to create a table of designs that is likely to be useful in practice.
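A minimal sketch of the layout being described: a full factorial in the whole-plot factors, a fractional factorial in the subplot factors, and whole plots split on a subplot effect so that whole-plot factor settings are replicated. The generators and factor counts are illustrative.

```python
import itertools
import numpy as np

wp = np.array(list(itertools.product([-1, 1], repeat=2)))      # whole plots: A, B
sp_base = np.array(list(itertools.product([-1, 1], repeat=2))) # subplots: p, q
sp = np.column_stack([sp_base, sp_base.prod(axis=1)])          # r = pq (generator)

# Split each whole plot on the subplot effect p: the p = -1 and p = +1
# halves become two separate whole plots with the same A, B setting.
rows = []
for a, b in wp:
    for half in (-1, 1):
        for run in sp[sp[:, 0] == half]:
            rows.append([a, b, half == 1, *run])   # third column: split indicator
design = np.array(rows, dtype=int)
print(design.shape)   # 16 runs: 8 whole plots (replicated A,B settings) x 2 subplot runs
```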
