Similar articles
20 similar articles found (search time: 31 ms)
1.
In the design of constant-stress life-testing experiments, the optimal allocation in a multi-level stress test with Type-I or Type-II censoring based on the Weibull regression model has been studied in the literature. Conventional Type-I and Type-II censoring schemes restrict our ability to observe extreme failures, yet these extreme failures are important for estimating upper quantiles and understanding the tail behavior of the lifetime distribution. For this reason, we propose the use of progressive extremal censoring at each stress level, of which conventional Type-II censoring is a special case. The proposed experimental scheme allows some extreme failures to be observed. The maximum likelihood estimators of the model parameters, the Fisher information, and the asymptotic variance–covariance matrices of the maximum likelihood estimates are derived. We consider the optimal experimental planning problem under four different optimality criteria. To avoid the computational burden of searching for the optimal allocation, a simple search procedure is suggested. Optimal allocations of units for two- and four-stress-level situations are determined numerically. The asymptotic Fisher information matrix and the asymptotic optimal allocation problem are also studied, and the results are compared with optimal allocations under specified sample sizes. Finally, conclusions and some practical recommendations are provided.
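The special case mentioned above, conventional Type-II censoring at a single stress level, already illustrates how censored observations enter the Weibull likelihood. The following is a minimal sketch (not the paper's progressive extremal scheme), assuming numpy and scipy; the sample sizes and true parameters are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Simulate a Type-II censored test: n units on test, stop at the r-th failure.
n, r = 50, 30
true_shape, true_scale = 1.5, 100.0
failures = np.sort(rng.weibull(true_shape, size=n) * true_scale)[:r]

def neg_loglik(params):
    """Negative log-likelihood for Weibull(shape k, scale lam): the r observed
    failures contribute log f(t_i); the n-r survivors contribute log S(t_(r))."""
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    z = failures / lam
    log_f = np.log(k / lam) + (k - 1) * np.log(z) - z**k
    log_S_at_r = -(failures[-1] / lam) ** k
    return -(log_f.sum() + (n - r) * log_S_at_r)

fit = minimize(neg_loglik, x0=[1.0, failures.mean()], method="Nelder-Mead")
shape_hat, scale_hat = fit.x
print(shape_hat, scale_hat)
```

The estimates should land reasonably near the true (1.5, 100); the Fisher information and variance–covariance matrices derived in the paper would quantify their precision.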

2.
In this article, we systematically study the optimal truncated group sequential test on binomial proportions. Through an analysis of the cost structure, average test cost is introduced as a new optimality criterion. Under this criterion, optimal tests are defined with respect to the design parameters, including the boundaries, success discriminant value, stage sample vector, stage size, and maximum sample size. Since the computation time needed to find optimal designs by exhaustive search is intolerably long, a group sequential sample space sorting method and accompanying procedures are developed to find near-optimal designs. In comparison with the international standard ISO 2859-1, the truncated group sequential designs proposed in this article reduce average test costs by around 20%.
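The average-test-cost idea can be seen in the simplest truncated sequential design, a two-stage binomial plan. This is a generic illustration with hypothetical boundary values, not the paper's design or the ISO 2859-1 plan:

```python
from math import comb

def asn_two_stage(p, n1, n2, accept1, reject1):
    """Average sample number (expected test cost in items) of a two-stage
    binomial plan: after n1 draws, accept if defects <= accept1, reject if
    defects >= reject1, otherwise draw n2 more items."""
    pmf = lambda k, n: comb(n, k) * p**k * (1 - p) ** (n - k)
    p_continue = sum(pmf(k, n1) for k in range(accept1 + 1, reject1))
    return n1 + p_continue * n2

# Average cost drops when an early accept/reject decision is likely.
for p in (0.01, 0.05, 0.10):
    print(p, asn_two_stage(p, n1=32, n2=32, accept1=0, reject1=4))
```

Optimizing boundaries, stage sizes, and the number of stages against this kind of expected cost is what makes exhaustive search expensive, motivating the sorting method in the paper.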

3.
Cut-off sampling is widely used for business surveys, whose populations are typically right-skewed with a long tail. Several methods have been suggested for obtaining the optimal cut-off point. The LH algorithm of Lavallee and Hidiroglou [6] is commonly used to find the optimum boundaries by minimizing the total sample size for a given precision. In this paper, we suggest a new cut-off point determination method that minimizes a cost function, which in turn reduces the size of the take-all stratum. We also investigate the optimal cut-off point using a typical parametric estimation method under assumptions about the underlying distributions. Small Monte Carlo simulation studies are performed to compare the new cut-off point method with the LH algorithm. The Korea Transportation Origin–Destination data are used for a real-data analysis.
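The structure of the problem, a take-all stratum above the cut-off and a sampled stratum below it, can be sketched as follows. This is a simplified stand-in for the LH precision constraint, assuming numpy, an SRS below the cut-off, a CV target on the estimated total, and a synthetic lognormal population:

```python
import numpy as np

rng = np.random.default_rng(1)
# A right-skewed "business" population with a long tail.
y = rng.lognormal(mean=2.0, sigma=1.2, size=2000)

def total_sample_size(y, cutoff, target_cv=0.05):
    """Take-all stratum above the cutoff; below it, the SRS size needed to
    reach a target CV for the estimated population total."""
    take_all = y[y >= cutoff]
    rest = y[y < cutoff]
    N_b, S2_b = len(rest), rest.var(ddof=1)
    V_target = (target_cv * y.sum()) ** 2
    # Solve N_b^2 * S2_b / n_b - N_b * S2_b = V_target for n_b.
    n_b = N_b**2 * S2_b / (V_target + N_b * S2_b)
    return len(take_all) + int(np.ceil(n_b))

cutoffs = np.quantile(y, np.linspace(0.80, 0.999, 60))
sizes = [total_sample_size(y, c) for c in cutoffs]
best_cutoff = cutoffs[int(np.argmin(sizes))]
print(best_cutoff, min(sizes))
```

A very low cut-off inflates the take-all stratum; a very high one inflates the variance below it, so the total sample size is minimized at an interior point, which is the trade-off both the LH algorithm and the proposed cost-based method exploit.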

4.
Group testing problems are considered as examples of discrete search problems. Existence theorems for optimal nonsequential designs, developed for general discrete search problems in O'Geran et al. (Acta Appl. Math. 25 (1991) 241–276), are applied to construct upper bounds for the length of optimal group testing strategies under the additive model. The key point of the study is the derivation of analytic expressions for the so-called Renyi coefficients. In addition, some asymptotic results are obtained and an asymptotic design problem is considered. The results imply, in particular, that if the number of significant factors is relatively small compared to the total number of factors, then choosing test collections that each contain half of the total number of factors is asymptotically optimal in a proper sense.

5.
This paper considers the likelihood ratio (LR) tests of stationarity, common trends and cointegration for multivariate time series. As the distribution of these tests is not known, a bootstrap version is proposed via a state-space representation. The bootstrap samples are obtained from the Kalman filter innovations under the null hypothesis. Monte Carlo simulations for the Gaussian univariate random walk plus noise model show that the bootstrap LR test achieves higher power for medium-sized deviations from the null hypothesis than a locally optimal and one-sided Lagrange Multiplier (LM) test that has a known asymptotic distribution. The power gains of the bootstrap LR test are significantly larger for testing the hypotheses of common trends and cointegration in multivariate time series, as the alternative asymptotic procedure – obtained as an extension of the LM test of stationarity – does not possess optimality properties. Finally, it is shown that the (pseudo-)LR tests maintain good size and power properties for non-Gaussian series as well. An empirical illustration is provided.

6.
This article discusses the discretization of continuous-time filters for application to discrete time series sampled at any fixed frequency. In this approach, the filter is first set up directly in continuous time; since such filters are expressed over a continuous range of lags, we also refer to them as continuous-lag filters. The second step is to discretize the filter itself. This approach applies to different problems in signal extraction, including trend or business cycle analysis, and the method allows for coherent design of discrete filters for observed data sampled as a stock or a flow, for nonstationary data with stochastic trend, and for different sampling frequencies. We derive explicit formulas for the mean squared error (MSE) optimal discretization filters. We also discuss the problem of optimal interpolation for nonstationary processes – namely, how to estimate the values of a process and its components at arbitrary times in between the sampling times. A number of illustrations of discrete filter coefficient calculations are provided, including the local level model (LLM) trend filter, the smooth trend model (STM) trend filter, and the band pass (BP) filter. The essential methodology can be applied to other kinds of trend extraction problems. Finally, we provide an extended demonstration of the method on CPI flow data measured at monthly and annual sampling frequencies.

7.
Summary.  A new test is proposed comparing two multivariate distributions by using distances between observations. Unlike earlier tests using interpoint distances, the new test statistic has a known exact distribution and is exactly distribution free. The interpoint distances are used to construct an optimal non-bipartite matching, i.e. a matching of the observations into disjoint pairs to minimize the total distance within pairs. The cross-match statistic is the number of pairs containing one observation from the first distribution and one from the second. Distributions that are very different will exhibit few cross-matches. When comparing two discrete distributions with finite support, the test is consistent against all alternatives. The test is applied to a study of brain activation measured by functional magnetic resonance imaging during two linguistic tasks, comparing brains that are impaired by arteriovenous abnormalities with normal controls. A second exact distribution-free test is also discussed: it ranks the pairs and sums the ranks of the cross-matched pairs.
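The cross-match statistic is easy to compute once the matching is in hand. The sketch below, assuming numpy, uses a brute-force minimum-weight perfect matching (adequate for tiny samples; the paper relies on efficient optimal non-bipartite matching algorithms) and synthetic well-separated samples:

```python
import numpy as np

def min_weight_matching(dist):
    """Brute-force minimum-weight perfect matching over a small point set."""
    def rec(free):
        if not free:
            return 0.0, []
        i = free[0]
        best_w, best_pairs = float("inf"), []
        for j in free[1:]:
            rest = tuple(k for k in free if k not in (i, j))
            w, pairs = rec(rest)
            if dist[i, j] + w < best_w:
                best_w, best_pairs = dist[i, j] + w, [(i, j)] + pairs
        return best_w, best_pairs
    return rec(tuple(range(dist.shape[0])))[1]

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(5, 2))   # first sample
z = rng.normal(3.0, 1.0, size=(5, 2))   # well-separated second sample
pts = np.vstack([x, z])
labels = np.array([0] * 5 + [1] * 5)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

pairs = min_weight_matching(dist)
cross_matches = sum(labels[i] != labels[j] for i, j in pairs)
print(cross_matches)
```

With two well-separated samples the matching pairs points within each group wherever possible, so the cross-match count is small (here, with odd group sizes, it must be odd), which is exactly the behavior the test exploits.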

8.
In recent years, a large number of new discrete distributions have appeared in the literature. However, flexible discrete models which, at the same time, allow for easy statistical inference, are still an exception. This paper makes a detailed analysis of a family of discrete failure time distributions which meets both requirements. It examines the maximum likelihood estimation of the unknown parameters and presents a goodness-of-fit test for this model. The test is used for the selection of an appropriate model for datasets of frequencies of the duration of atmospheric circulation patterns.

9.
Circular data – data whose values lie in the interval [0,2π) – are important in a number of application areas. In some, there is a suspicion that a sequence of circular readings may contain two or more segments following different models. An analysis may then seek to decide whether there are multiple segments, and if so, to estimate the changepoints separating them. This paper presents an optimal method for segmenting sequences of data following the von Mises distribution. It is shown by example that the method is also successful in data following a distribution with much heavier tails.
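For the single-changepoint case, a likelihood-based split is simple to sketch. Assuming numpy and a common known concentration, the two-segment von Mises log-likelihood reduces to the sum of the segment resultant lengths, so the changepoint estimate is the split maximizing R1 + R2 (a simplification of the paper's general method; the data here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(7)
# 60 readings around direction 0, then 40 readings around direction 2.5.
theta = np.concatenate([rng.vonmises(0.0, 4.0, 60), rng.vonmises(2.5, 4.0, 40)])

def resultant_length(a):
    return np.hypot(np.cos(a).sum(), np.sin(a).sum())

# With a common known concentration, the two-segment von Mises likelihood
# is maximised by maximising R1 + R2 over candidate split points.
splits = list(range(5, len(theta) - 5))
scores = [resultant_length(theta[:s]) + resultant_length(theta[s:]) for s in splits]
changepoint = splits[int(np.argmax(scores))]
print(changepoint)   # should land near the true changepoint at 60
```

The circular geometry matters here: resultant lengths, not ordinary means and variances, measure how concentrated each candidate segment is.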

10.
The comparative powers of six discrete goodness-of-fit test statistics for a uniform null distribution against a variety of fully specified alternative distributions are discussed. The results suggest that the test statistics based on the empirical distribution function for ordinal data (Kolmogorov–Smirnov, Cramér–von Mises, and Anderson–Darling) are generally more powerful for trend alternative distributions. The test statistics for nominal (Pearson's chi-square and the nominal Kolmogorov–Smirnov) and circular data (Watson's test statistic) are shown to be generally more powerful for the investigated triangular (∨), flat (or platykurtic type), sharp (or leptokurtic type), and bimodal alternative distributions.
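Two of the compared statistics are easy to state concretely. A minimal numpy sketch of Pearson's chi-square and a discrete Kolmogorov–Smirnov statistic against a uniform null (the counts are illustrative, not from the study):

```python
import numpy as np

def discrete_gof(counts):
    """Pearson chi-square and a discrete Kolmogorov-Smirnov statistic
    against a uniform null on k ordered categories."""
    counts = np.asarray(counts, dtype=float)
    n, k = counts.sum(), len(counts)
    expected = np.full(k, n / k)
    chi2 = float(((counts - expected) ** 2 / expected).sum())
    ks = float(np.abs(np.cumsum(counts) / n - np.arange(1, k + 1) / k).max())
    return chi2, ks

# A monotone "trend" alternative: the EDF-based statistic picks it up.
chi2, ks = discrete_gof([5, 10, 15, 20, 25, 25])
print(chi2, ks)   # 20.0 and 0.2
```

The contrast in the abstract follows from the definitions: chi-square ignores category order, while the EDF statistic accumulates deviations along the ordering, which is why it is more sensitive to trend alternatives.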

11.
There are two principal issues in statistical planning. One is the accuracy/reliability of statistical inference, and the other is the length of test time needed to complete the designed experiment. With regard to the latter, various test schemes have been proposed and applied in the statistical literature. These schemes include, among others, type-I censoring, the usual type-II censoring, and progressive type-II censoring. To implement any of these experiments it is necessary that the capacity of the test facility is large enough for all the items to be tested simultaneously. If, however, instead of one facility with large capacity there are several facilities with relatively smaller capacities, a differently designed experiment is necessary. This paper studies and compares the elapsed test times and total elapsed test times corresponding to different statistical plans. The results obtained here are useful for designing an experiment with a shorter test time in a certain sense.

12.
Some information is lost when numerical scores evaluating performances are converted into letter grades. We propose to measure this information loss by the proportion of variance lost due to grouping. We study various properties of this measure, including its invariance in location and scale equivariant families. The information loss typically decreases as the number of levels of letter grades increases; however, it is not appropriate to have too many levels. The optimum number of levels may be determined either by visual inspection, when the information loss becomes marginal/stable, or by minimizing the sum of the information loss and a penalty term, the latter taken to be linear in the number of levels. We also address the problem of determining the groups, or equivalently the boundaries, so that the information loss is minimized for a fixed number of groups. Finding these optimal boundaries is a computationally intensive exercise even for moderate-size data, unless the number of groups is very small. We recommend an alternative approach based on fitting an appropriate probability distribution. When the probabilistic nature of the data is known, the boundary points turn out to be solutions to a system of equations; however, these solutions need not have a closed form. We derive exact or approximate solutions of these equations when the composite scores follow a probability distribution from the Uniform, Triangular, or Gaussian family.
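The proportion-of-variance-lost measure can be computed directly: replace each score by its group mean and compare variances. A small numpy sketch with hypothetical grade boundaries (the exact definition in the paper may differ in details):

```python
import numpy as np

def information_loss(scores, boundaries):
    """Proportion of variance lost due to grouping: replace each score by
    its group mean and report 1 - between-group / total variance."""
    groups = np.digitize(scores, boundaries)
    grouped = scores.copy()
    for g in np.unique(groups):
        grouped[groups == g] = scores[groups == g].mean()
    return 1.0 - grouped.var() / scores.var()

rng = np.random.default_rng(3)
scores = rng.normal(70.0, 10.0, size=500)
grade_boundaries = ([70.0], [60.0, 80.0], [55.0, 65.0, 75.0, 85.0])
losses = [information_loss(scores, np.array(b)) for b in grade_boundaries]
for b, loss in zip(grade_boundaries, losses):
    print(len(b) + 1, "grades:", round(loss, 3))
```

As the abstract notes, the loss shrinks as the number of grade levels grows, which is why a penalty on the number of levels is needed to pick a sensible optimum.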

13.
A K-sample testing problem is studied for multivariate counting processes with time-dependent frailty. Asymptotic distributions and efficiency of a class of non-parametric test statistics are established for certain local alternatives. The notion of efficiency used here is that for every non-parametric test in this class there is a parametric submodel for which the optimal test has the same asymptotic power as the non-parametric one. The theory is applied to analyse a diabetic retinopathy study data set. A simulation study is also presented to illustrate the theory.

14.
15.
Finite mixtures of distributions have seen increasing use in the applied literature. In the continuous case, linear combinations of exponentials and gammas have been shown to be well suited for modeling purposes. In the discrete case, the focus has primarily been on continuous mixing, usually of Poisson distributions and typically using gammas to describe the random parameter. But many of these applications are forced, especially when a continuous mixing distribution is used. Instead, it is often preferable to try finite mixtures of geometrics or negative binomials, since these are the fundamental building blocks of all discrete random variables. To date, a major stumbling block to their use has been the lack of easy routines for estimating the parameters of such models. This problem has now been alleviated by adapting to the discrete case numerical procedures recently developed for exponential, Weibull, and gamma mixtures. The new methods have been applied to four previously studied data sets, with significant improvements in goodness-of-fit and resulting implications for each affected study.
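Fitting a finite mixture of geometrics is a standard EM exercise. The sketch below, assuming numpy and synthetic data, is a generic EM routine rather than the specific numerical procedures adapted in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)
# Mixture of two geometrics (support 1, 2, ...), weights 0.6 / 0.4.
data = np.where(rng.random(2000) < 0.6,
                rng.geometric(0.7, 2000), rng.geometric(0.15, 2000))

def em_geometric_mixture(x, n_iter=200):
    """EM for a two-component geometric mixture P(X=k) = p(1-p)^(k-1)."""
    w, p1, p2 = 0.5, 0.5, 0.1          # crude starting values
    for _ in range(n_iter):
        f1 = w * p1 * (1 - p1) ** (x - 1)
        f2 = (1 - w) * p2 * (1 - p2) ** (x - 1)
        r = f1 / (f1 + f2)             # E-step: component responsibilities
        w = r.mean()                   # M-step: weighted geometric MLEs
        p1 = r.sum() / (r * x).sum()
        p2 = (1 - r).sum() / ((1 - r) * x).sum()
    return w, p1, p2

w, p1, p2 = em_geometric_mixture(data.astype(float))
print(w, p1, p2)
```

With well-separated components the estimates land near the true (0.6, 0.7, 0.15); the M-step uses the fact that the geometric MLE is the reciprocal of the (weighted) sample mean.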

16.
In this article, a new discrete distribution related to the generalized gamma distribution (Stacy, 1962) is derived from a statistical mechanical setup. This new distribution can be seen as a generalization of the two-parameter discrete gamma distribution (Chakraborty and Chakravarty, 2012) and encompasses discrete versions of many important continuous distributions. Some basic distributional and reliability properties, parameter estimation by different methods, and the comparative performance of these methods in simulations are investigated. Two real-life data sets are considered for data modeling, and a likelihood ratio test is used to illustrate the advantages of the proposed distribution over the two-parameter discrete gamma distribution.

17.
In this paper, we extend Bernstein's theorem using basic tools of calculus on time scales and, as a further application, introduce the discrete nabla and delta Mittag-Leffler distributions with respect to their Laplace transforms on the discrete time scale. For these discrete distributions, infinite divisibility and geometric infinite divisibility are proved along with some statistical properties. The delta and nabla Mittag-Leffler processes are defined.

18.
Standard methods of optimal stratification solve the optimization problem as a function of strata boundaries and sample allocation only. In this paper we show that, by means of a flexible two-stage grid search procedure, the strata boundaries, the sample allocation, and furthermore the number of strata can be taken into account effectively when optimizing stratification and allocation. By means of a Monte Carlo simulation we show that the proposed procedure is efficient compared with the well-known standard procedures.
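The two-stage grid search idea can be sketched for the simplest case, one boundary and two strata under Neyman allocation. This is a minimal illustration with a synthetic skewed population (the paper additionally searches over the number of strata), assuming numpy and ignoring finite population corrections:

```python
import numpy as np

rng = np.random.default_rng(11)
y = rng.gamma(2.0, 50.0, size=5000)   # skewed study variable

def stratified_var(y, boundary, n=200):
    """Variance of the stratified mean under Neyman allocation for one
    boundary splitting the population into two strata."""
    strata = [y[y < boundary], y[y >= boundary]]
    W = np.array([len(s) / len(y) for s in strata])
    S = np.array([s.std(ddof=1) for s in strata])
    n_h = n * W * S / (W * S).sum()        # Neyman allocation
    return float((W**2 * S**2 / n_h).sum())

# Stage 1: coarse grid; stage 2: refined grid around the coarse optimum.
coarse = np.quantile(y, np.linspace(0.1, 0.9, 17))
b0 = coarse[int(np.argmin([stratified_var(y, b) for b in coarse]))]
fine = np.linspace(0.8 * b0, 1.2 * b0, 41)
best = fine[int(np.argmin([stratified_var(y, b) for b in fine]))]
print(best, stratified_var(y, best))
```

The coarse-then-fine pattern is what keeps the search cheap when boundaries, allocation, and the number of strata are all optimized jointly.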

19.
In this article, the discrete analog of the Weibull geometric distribution is introduced. The discrete Weibull, discrete Rayleigh, and geometric distributions are submodels of this distribution. Some basic distributional properties, the hazard function, random number generation, moments, and order statistics of this new discrete distribution are studied. Estimation of the parameters is done using the maximum likelihood method. The application of the distribution is illustrated using two datasets.

20.
Summary.  To help to design vaccines for acquired immune deficiency syndrome that protect broadly against many genetic variants of the human immunodeficiency virus, the mutation rates at 118 positions in HIV amino-acid sequences of subtype C versus those of subtype B were compared. The false discovery rate (FDR) multiple-comparisons procedure can be used to determine statistical significance. When the test statistics have discrete distributions, the FDR procedure can be made more powerful by a simple modification. The paper develops a modified FDR procedure for discrete data and applies it to the human immunodeficiency virus data. The new procedure detects 15 positions with significantly different mutation rates compared with 11 that are detected by the original FDR method. Simulations delineate conditions under which the modified FDR procedure confers large gains in power over the original technique. In general FDR adjustment methods can be improved for discrete data by incorporating the modification proposed.
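The baseline the paper improves on is the original Benjamini–Hochberg step-up procedure, which can be written in a few lines. A numpy sketch with illustrative p-values (the paper's discrete-data modification is more powerful but not shown here):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Original BH step-up: reject the k smallest p-values, where k is the
    largest i with p_(i) <= i*q/m."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = int(np.nonzero(below)[0].max()) + 1 if below.any() else 0
    rejected = np.zeros(m, dtype=bool)
    rejected[order[:k]] = True
    return rejected

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.300, 0.740]
print(benjamini_hochberg(pvals).sum())   # 2 rejections at q = 0.05
```

With discrete test statistics (as in the HIV mutation-rate comparisons), attainable p-values are coarser than the uniform thresholds assume, which is the slack the modified procedure converts into extra power.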
