Similar Documents
 20 similar documents found (search took 15 ms)
1.
Abstract

Indirect approaches based on minimal path vectors (d-MPs) and/or minimal cut vectors (d-MCs) are reported to be efficient for the reliability evaluation of multistate networks. However, such techniques may still be cumbersome when the network is large and its components have many states, so more efficient evaluation methods are needed. Alternatively, computing reliability bounds can provide approximate reliability with less computational effort. Based on Bai’s exact and indirect reliability evaluation algorithm, an improved algorithm is proposed in this study, which provides sequences of upper and lower reliability bounds for multistate networks. Novel heuristic rules with a pre-specified value to filter less important sets of unspecified states are then developed and incorporated into the algorithm. Computational experiments comparing the proposed methods with an existing direct bounding algorithm show that the new algorithms can provide tight reliability bounds with less computational effort, especially the proposed algorithm with heuristic L1.
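As a rough illustration of the contrast the abstract draws (not the paper's d-MP bounding algorithm), the sketch below estimates multistate-network reliability by plain Monte Carlo on a hypothetical two-edge parallel network whose per-edge capacity distribution and demand level are invented for the example, and compares it with exact enumeration:

```python
import random

# Hypothetical two-edge parallel network: each edge's capacity is 0, 1 or 2
# with the (assumed) probabilities below; the system is "up" when the total
# capacity meets the demand level.
STATE_PROBS = {0: 0.1, 1: 0.3, 2: 0.6}
DEMAND = 2

def draw_capacity(rng):
    """Sample one edge's capacity from STATE_PROBS."""
    u = rng.random()
    cum = 0.0
    for state, p in sorted(STATE_PROBS.items()):
        cum += p
        if u < cum:
            return state
    return max(STATE_PROBS)

def mc_reliability(n_trials, seed=0):
    """Monte Carlo estimate of P(total capacity >= DEMAND)."""
    rng = random.Random(seed)
    hits = sum(
        draw_capacity(rng) + draw_capacity(rng) >= DEMAND
        for _ in range(n_trials)
    )
    return hits / n_trials

def exact_reliability():
    """Exact reliability by enumerating the 3 x 3 joint edge states."""
    return sum(
        p1 * p2
        for s1, p1 in STATE_PROBS.items()
        for s2, p2 in STATE_PROBS.items()
        if s1 + s2 >= DEMAND
    )
```

On realistic networks the joint state space explodes, which is exactly why indirect d-MP/d-MC methods and bounding algorithms such as the one proposed here matter.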

2.
Abstract

The Kruskal–Wallis test is a popular nonparametric test for comparing k independent samples. In this article we propose a new algorithm to compute the exact null distribution of the Kruskal–Wallis test, which is needed to compare several approximation methods. The 5% cut-off points of the exact null distribution, which StatXact cannot produce, are obtained as by-products. We also investigate graphically why the exact and approximate distributions differ, and hope this will be a useful tutorial tool for teaching the Kruskal–Wallis test in undergraduate courses.
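For very small samples, the exact null distribution can be obtained by brute force: under the null, every assignment of the pooled ranks to the groups is equally likely. The sketch below (a naive enumeration, not the paper's algorithm, which is designed to scale much further) tabulates the distribution of the H statistic exactly using rational arithmetic:

```python
from itertools import permutations
from fractions import Fraction
from collections import Counter

def kw_statistic(groups):
    """Kruskal-Wallis H for rank-valued groups (no ties):
    H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1)."""
    n = sum(len(g) for g in groups)
    return Fraction(12, n * (n + 1)) * sum(
        Fraction(sum(g)) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)

def exact_null_distribution(sizes):
    """Exact null distribution of H by enumerating all rank assignments.

    Feasible only for tiny designs (N! assignments); illustrates why a
    smarter algorithm is needed for practical sample sizes.
    """
    n = sum(sizes)
    dist = Counter()
    for perm in permutations(range(1, n + 1)):
        groups, start = [], 0
        for size in sizes:
            groups.append(perm[start:start + size])
            start += size
        dist[kw_statistic(groups)] += 1
    total = sum(dist.values())
    return {h: Fraction(c, total) for h, c in sorted(dist.items())}
```

For group sizes (2, 2), for example, H takes only the values 0, 3/5, and 12/5, each with probability 1/3, which shows concretely how lumpy the exact distribution is compared with its chi-square approximation.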

3.
ABSTRACT

Incremental modelling of data streams is of great practical importance, as shown by its applications in advertising and financial data analysis. We propose two incremental covariance matrix decomposition methods for a compositional data type. The first method, exact incremental covariance decomposition of compositional data (C-EICD), gives an exact decomposition result. The second method, covariance-free incremental covariance decomposition of compositional data (C-CICD), is an approximate algorithm that can efficiently handle high-dimensional cases. Based on these two methods, many frequently used compositional statistical models can be incrementally calculated. We take multiple linear regression and principal component analysis as examples to illustrate the utility of the proposed methods via extensive simulation studies.
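To make the "incremental" idea concrete, here is a generic streaming covariance update (a Welford-style scheme; the paper's C-EICD/C-CICD methods are specialised to compositional data and its decomposition, which this sketch does not cover):

```python
class OnlineCovariance:
    """Streaming mean and covariance via Welford-style updates.

    Each observation updates the running mean and the co-moment matrix
    in O(d^2) time, so the covariance never has to be recomputed from
    scratch as the stream grows.
    """

    def __init__(self, dim):
        self.n = 0
        self.mean = [0.0] * dim
        # Running sum of outer products of deviations (co-moment matrix).
        self.m2 = [[0.0] * dim for _ in range(dim)]

    def update(self, x):
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]
        self.mean = [mi + d / self.n for mi, d in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]
        for i in range(len(x)):
            for j in range(len(x)):
                # Standard online rule: C += (x - mean_old)(x - mean_new)^T
                self.m2[i][j] += delta[i] * delta2[j]

    def covariance(self):
        """Sample covariance matrix (requires n >= 2)."""
        return [[v / (self.n - 1) for v in row] for row in self.m2]
```

Incremental decompositions such as those proposed in the paper go one step further, updating the decomposition of this matrix rather than the matrix alone.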

4.
ABSTRACT

Calibration, also called inverse regression, is a classical problem that often arises in a regression setup under fixed design. The aim of this article is to propose a stochastic method which gives an estimated solution for a linear calibration problem. We establish exponential inequalities of Bernstein–Fréchet type for the probability of the distance between the approximate solutions and the exact one. Furthermore, we build a confidence domain for this exact solution. To check the validity of our results, a numerical example is proposed.
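For readers unfamiliar with the setup, the classical point estimate in linear calibration simply inverts the fitted regression line at the observed response (this is the baseline problem; the paper's contribution, a stochastic method with Bernstein–Fréchet-type probability bounds, is not reproduced here):

```python
def ols_fit(xs, ys):
    """Least-squares fit of y = a + b*x."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = ybar - b * xbar
    return a, b

def calibrate(xs, ys, y0):
    """Classical calibration estimate: invert the fitted line at y0,
    i.e. x0_hat = (y0 - a_hat) / b_hat."""
    a, b = ols_fit(xs, ys)
    return (y0 - a) / b
```

The difficulty the literature wrestles with is that this ratio estimator has awkward sampling behaviour when the slope is poorly estimated, which motivates probabilistic bounds and confidence domains of the kind derived in the article.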

5.
6.
Many situations, especially in Bayesian statistical inference, call for the use of a Markov chain Monte Carlo (MCMC) method as a way to draw approximate samples from an intractable probability distribution. With the use of any MCMC algorithm comes the question of how long the algorithm must run before it can be used to draw an approximate sample from the target distribution. A common method of answering this question involves verifying that the Markov chain satisfies a drift condition and an associated minorization condition (Rosenthal, J Am Stat Assoc 90:558–566, 1995; Jones and Hobert, Stat Sci 16:312–334, 2001). This is often difficult to do analytically, so as an alternative, it is typical to rely on output-based methods of assessing convergence. The work presented here gives a computational method of approximately verifying a drift condition and a minorization condition specifically for the symmetric random-scan Metropolis algorithm. Two examples of the use of the method described in this article are provided, and output-based methods of convergence assessment are presented in each example for comparison with the upper bound on the convergence rate obtained via the simulation-based approach.
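For context, the sampler in question accepts symmetric proposals with probability min(1, π(y)/π(x)). A minimal one-dimensional random-walk sketch (a standard-normal target is assumed here for illustration; the paper's drift/minorization verification machinery is not shown):

```python
import math
import random

def log_target(x):
    """Unnormalised log-density of the target (standard normal here)."""
    return -0.5 * x * x

def metropolis(n_iter, step=1.0, seed=1):
    """Symmetric random-walk Metropolis sampler in one dimension.

    Because the uniform proposal is symmetric, the acceptance ratio
    reduces to pi(proposal) / pi(current)."""
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_iter):
        proposal = x + rng.uniform(-step, step)
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        chain.append(x)
    return chain
```

Deciding how many of these iterations are "enough" is precisely the burn-in question; drift and minorization conditions give provable upper bounds on the convergence rate, while output-based diagnostics inspect chains like the one above empirically.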

7.
In this paper we consider the long-run availability of a parallel system having several independent renewable components with exponentially distributed failure and repair times. We are interested in testing the availability of the system, or constructing a lower confidence bound for it, using component test data. For this problem, no exact test or confidence bound is available in the literature; only approximate methods exist. Using the generalized p-value approach, an exact test and a generalized confidence interval are given. An example is given to illustrate the proposed procedures, and a simulation study demonstrates their advantages over the available approximate procedures. Based on type I and type II error rates, the simulation study shows that the generalized procedures outperform the other available methods.
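The underlying availability quantities are simple to state. With exponential failures (rate λ) and repairs (rate μ), a component's long-run availability is μ/(λ+μ), and an independent parallel system is down only when every component is down; a direct sketch (the paper's contribution concerns inference on this quantity from test data, not the formula itself):

```python
def component_availability(failure_rate, repair_rate):
    """Long-run availability of one component with exponential failure
    and repair times: A = mu / (lambda + mu) = MTTF / (MTTF + MTTR)."""
    return repair_rate / (failure_rate + repair_rate)

def parallel_availability(rates):
    """Long-run availability of a parallel system of independent
    components: 1 minus the probability that all components are down.

    `rates` is a list of (failure_rate, repair_rate) pairs.
    """
    unavail = 1.0
    for lam, mu in rates:
        unavail *= 1.0 - component_availability(lam, mu)
    return 1.0 - unavail
```

The inferential difficulty is that only estimates of the rates are observed, so exact tests on the resulting nonlinear function of the parameters require tools such as generalized p-values.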

8.
ABSTRACT

In applications using a simple regression model with a balanced two-fold nested error structure, interest focuses on inferences concerning the regression coefficient. This article derives exact and approximate confidence intervals on the regression coefficient in such a model. Eleven methods are considered for constructing the confidence intervals. Computer simulation is performed to compare the proposed confidence intervals, and recommendations are given for selecting an appropriate method.

9.
Abstract

We study optimal block designs for comparing a set of test treatments with a control treatment. We provide the class of all E-optimal approximate block designs, which is characterized by simple linear constraints. Based on this characterization, we obtain a class of E-optimal exact designs for unequal block sizes. In the studied model, we provide a statistical interpretation for wide classes of E-optimal designs. Moreover, we show that all approximate A-optimal designs and a large class of A-optimal exact designs for treatment-control comparisons are also R-optimal. This reinforces the observation that A-optimal designs perform well even for rectangular confidence regions.

10.
The efficient design of experiments for comparing a control with v new treatments when the data are dependent is investigated. We concentrate on generalized least-squares estimation for a known covariance structure, considering block sizes k equal to 3 or 4 and approximate designs. This approach may lead to exact optimal designs for some v, b, k; more typically it indicates the structure of an efficient design for given v, b, k and yields an efficiency bound that is usually unattainable. The bound and the structure can then be used to investigate efficient finite designs.

11.
ABSTRACT

This paper introduces an extension of the Markov switching GARCH model in which the volatility in each state is a convex combination of two different GARCH components with time-varying weights. This makes the model flexible enough to capture different kinds of shocks. The asymptotic behavior of the second moment is investigated and an appropriate upper bound for it is evaluated. Using the Bayesian method via a Gibbs sampling algorithm, a dynamic method for the estimation of the parameters is proposed. Finally, we illustrate the efficiency of the model by simulation and by considering two different sets of empirical financial data. We show that this model provides much better volatility forecasts than the Markov switching GARCH model.
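The core recursion is easy to sketch. The toy version below mixes two GARCH(1,1) variance components with a fixed weight (an assumption made purely for illustration; in the paper the weights are time-varying and governed by the Markov-switching structure):

```python
def mixture_garch_variance(returns, params1, params2, weight=0.5):
    """Conditional variance as a convex combination of two GARCH(1,1)
    components: h_t = w * h1_t + (1 - w) * h2_t.

    Each params tuple is (omega, alpha, beta); the components are started
    at their unconditional variances omega / (1 - alpha - beta).
    """
    (w1, a1, b1), (w2, a2, b2) = params1, params2
    h1 = w1 / max(1e-12, 1 - a1 - b1)
    h2 = w2 / max(1e-12, 1 - a2 - b2)
    hs = []
    for r in returns:
        hs.append(weight * h1 + (1 - weight) * h2)
        # Standard GARCH(1,1) updates for each component.
        h1 = w1 + a1 * r * r + b1 * h1
        h2 = w2 + a2 * r * r + b2 * h2
    return hs
```

Letting the weight evolve over time, and switching parameter regimes with a hidden Markov chain, yields the richer dynamics the paper studies.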

12.
Abstract

We develop an exact approach for the determination of the minimum sample size for estimating a Poisson parameter such that the pre-specified levels of relative precision and confidence are guaranteed. The exact computation is made possible by reducing infinitely many evaluations of coverage probability to finitely many evaluations. The theory for supporting such a reduction is that the minimum of coverage probability with respect to the parameter in an interval is attained at a discrete set of finitely many elements. Computational mechanisms have been developed to further reduce the computational complexity. An explicit bound for the minimum sample size is established.
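As a point of reference, the usual normal-approximation sample size for this problem is a one-line formula (this is only the rough starting point that an exact coverage computation like the paper's would refine, and it requires a guess at the unknown mean):

```python
import math
from statistics import NormalDist

def poisson_sample_size(lam_guess, rel_precision, confidence):
    """Normal-approximation sample size for estimating a Poisson mean
    lambda to within relative precision eps at the given confidence.

    Since Var(xbar) = lambda / n, requiring
        z * sqrt(lambda / n) <= eps * lambda
    gives n >= z**2 / (eps**2 * lambda), with z the standard normal
    (1 - alpha/2) quantile.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil(z ** 2 / (rel_precision ** 2 * lam_guess))
```

The exact approach guarantees the stated coverage for all parameter values in an interval, which the approximation above cannot do, especially for small means.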

13.
The exact inference and prediction intervals for the K-sample exponential scale parameter under doubly Type-II censored samples are derived using an algorithm of Huffer and Lin [Huffer, F.W. and Lin, C.T., 2001, Computing the joint distribution of general linear combinations of spacings or exponential variates. Statistica Sinica, 11, 1141–1157.]. This approach provides a simple way to determine the exact percentage points of the pivotal quantity based on the best linear unbiased estimator, in order to develop exact inference for the scale parameter as well as to construct exact prediction intervals for failure times unobserved in the ith sample. Similarly, exact prediction intervals for failure times of units from a future sample can also be easily obtained.

14.

In this paper, we make use of an algorithm of Huffer and Lin (2001) in order to develop exact interval estimation for the location and scale parameters of an exponential distribution based on general progressively Type-II censored samples. The exact prediction intervals for failure times of the items censored at the last observation are also presented for one-parameter and two-parameter exponential distributions. Finally, we give two examples to illustrate the methods of inference developed here.

15.
16.
ABSTRACT

Despite the popularity of the general linear mixed model for data analysis, power and sample size methods and software are not generally available for commonly used test statistics and reference distributions. Statisticians resort to simulations with homegrown and uncertified programs or rough approximations which are misaligned with the data analysis. For a wide range of designs with longitudinal and clustering features, we provide accurate power and sample size approximations for inference about fixed effects in the linear models we call reversible. We show that under widely applicable conditions, the general linear mixed-model Wald test has noncentral distributions equivalent to well-studied multivariate tests. In turn, exact and approximate power and sample size results for the multivariate Hotelling–Lawley test provide exact and approximate power and sample size results for the mixed-model Wald test. The calculations are easily computed with a free, open-source product that requires only a web browser to use. Commercial software can be used for a smaller range of reversible models. Simple approximations allow accounting for modest amounts of missing data. A real-world example illustrates the methods. Sample size results are presented for a multicenter study on pregnancy. The proposed study, an extension of a funded project, has clustering within clinic. Exchangeability among the participants allows averaging across them to remove the clustering structure. The resulting simplified design is a single-level longitudinal study. Multivariate methods for power provide an approximate sample size. All proofs and inputs for the example are in the supplementary materials (available online).

17.
ABSTRACT

The interval estimation problem is investigated for the parameters of a general lower truncated distribution under a double Type-II censoring scheme. Exact, asymptotic, and bootstrap interval estimates are derived for the unknown model parameter and the lower truncation threshold. One real-life example and a numerical study are presented to illustrate the performance of our methods.
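Of the three interval types compared, the bootstrap one is the easiest to sketch generically. The percentile bootstrap below is a generic illustration only; the paper's bootstrap is tailored to the truncated model and its threshold parameter:

```python
import random

def bootstrap_percentile_ci(data, stat, level=0.95, n_boot=2000, seed=0):
    """Generic percentile-bootstrap confidence interval for stat(data).

    Resamples the data with replacement n_boot times and reads the
    interval endpoints off the sorted replicate statistics.
    """
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in data]) for _ in range(n_boot)
    )
    lo = reps[int(((1 - level) / 2) * n_boot)]
    hi = reps[int((1 - (1 - level) / 2) * n_boot) - 1]
    return lo, hi
```

Exact intervals, when a pivotal quantity is available, avoid the resampling noise entirely, which is why the paper treats them alongside the asymptotic and bootstrap versions.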

18.
Abstract

In this paper, we investigate some ruin problems for risk models that contain uncertainties on both claim frequency and claim size distribution. The problems naturally lead to the evaluation of ruin probabilities under the so-called G-expectation framework. We assume that the risk process is described as a class of G-compound Poisson process, a special case of the G-Lévy process. By using the exponential martingale approach, we obtain the upper bounds for the two-sided ruin probability as well as the ruin probability involving investment. Furthermore, we derive the optimal investment strategy under the criterion of minimizing this upper bound. Finally, we conclude that the upper bound in the case with investment is less than or equal to the case without investment.
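In the classical (non-G-expectation) setting, the exponential-martingale approach yields the Lundberg bound P(ruin | initial surplus u) ≤ exp(−R·u), where the adjustment coefficient R solves c·r = λ(M(r) − 1). A sketch for exponential claims, where the root can also be checked in closed form as R = β − λ/c (the paper's bounds under G-expectation are a generalisation this classical sketch does not cover):

```python
def adjustment_coefficient(lam, c, beta, tol=1e-10):
    """Adjustment (Lundberg) coefficient R for a compound Poisson risk
    process with claim rate lam, premium rate c, and Exp(beta) claims,
    whose moment generating function is M(r) = beta / (beta - r), r < beta.

    Solves lam * (M(r) - 1) - c * r = 0 for the positive root by bisection;
    assumes the net profit condition c > lam / beta.
    """
    def g(r):
        return lam * (beta / (beta - r) - 1) - c * r

    lo, hi = tol, beta - tol  # the positive root lies in (0, beta)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid  # g is negative below the root, positive above it
        else:
            hi = mid
    return (lo + hi) / 2
```

Minimising an upper bound of this exponential form over investment strategies is the optimisation the paper carries out in its more general framework.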

19.
Rejoinder     
Abstract

In this article several formulae are provided for approximating the critical values of tests on the actual values of the process capability indices CPL, CPU, and Cpk. These formulae are based on different approximations of the percentiles of the noncentral t distribution, and their performance is evaluated by comparing the values they yield with the exact critical values for several significance levels, test values, and sample sizes. The results show that some of the presented techniques are valuable tools when the exact critical values of the tests are not available, since they approximate the exact values readily and rather accurately.
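One classical route of this kind approximates the noncentral t CDF by a normal probability and then inverts it numerically (this is a well-known normal approximation, not necessarily one of the specific formulae compared in the article, and exact routines should be preferred when available):

```python
import math
from statistics import NormalDist

def approx_nct_cdf(t, df, delta):
    """Normal approximation to the noncentral t CDF:
    P(T <= t) ~ Phi((t*(1 - 1/(4*df)) - delta) / sqrt(1 + t*t/(2*df)))."""
    num = t * (1 - 1 / (4 * df)) - delta
    den = math.sqrt(1 + t * t / (2 * df))
    return NormalDist().cdf(num / den)

def approx_nct_quantile(p, df, delta, lo=0.0, hi=100.0, tol=1e-9):
    """Approximate upper percentile of the noncentral t distribution,
    found by bisecting the approximate CDF (delta >= 0, p above the
    CDF value at lo assumed, as in capability-index critical values)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if approx_nct_cdf(mid, df, delta) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Critical values for tests on CPL, CPU, and Cpk are exactly such noncentral t percentiles with a noncentrality parameter determined by the hypothesised index value and the sample size.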

20.
ABSTRACT

We find maximum and minimum extensions of finite 2-subcopulas and discuss the difficulties involved in finding the least upper bound of extensions in higher dimensions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号