Similar Documents
20 similar documents found (search time: 890 ms)
1.
A recursive same-sign relation is derived that reduces the probability of occurrence of at least m out of N independent events to the probability of occurrence of at least m out of N − 1 of these N events.
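A minimal sketch of this kind of reduction, assuming independent events with known probabilities `p[i]` (the paper's exact same-sign relation may differ in form; this is the standard recursion obtained by conditioning on the last event):

```python
def prob_at_least(m, p):
    """P(at least m of the len(p) independent events occur).

    p is a sequence of occurrence probabilities. Conditioning on the
    last event reduces the N-event problem to an (N-1)-event problem.
    """
    if m <= 0:
        return 1.0          # "at least 0" always holds
    if m > len(p):
        return 0.0          # cannot have more occurrences than events
    *rest, pn = p
    # Last event occurs (need m-1 among the rest) or does not (need m).
    return pn * prob_at_least(m - 1, rest) + (1 - pn) * prob_at_least(m, rest)
```

For example, with two fair coin flips the chance of at least one head is 0.75.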

2.
3.
A consecutive k-within-m-out-of-n:F system consists of n linearly ordered components and fails if and only if there are m consecutive components which include among them at least k failed components. This system model generalizes both consecutive k-out-of-n:F and k-out-of-n:F systems. In this article, we study the dynamic reliability properties of consecutive k-within-m-out-of-n:F system consisting of exchangeable dependent components. We also obtain some stochastic ordering results and use them to get simple approximation formulae for the survival function and mean time to failure of this system.
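The failure rule can be sketched directly from the definition. The brute-force reliability computation below assumes i.i.d. components for simplicity (the paper treats exchangeable dependent components, which this does not capture):

```python
from itertools import product

def system_fails(states, k, m):
    """True iff some window of m consecutive components contains at least
    k failed ones. states[i] == 1 means component i has failed (linear order)."""
    n = len(states)
    return any(sum(states[i:i + m]) >= k for i in range(n - m + 1))

def reliability_iid(n, k, m, q):
    """Exact survival probability when components fail independently with
    probability q, by enumerating all 2^n component states (small n only)."""
    r = 0.0
    for states in product([0, 1], repeat=n):
        if not system_fails(states, k, m):
            f = sum(states)
            r += q**f * (1 - q)**(n - f)
    return r
```

As a sanity check, for k = m = 1 the system fails whenever any component fails, so the reliability reduces to (1 − q)^n.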

4.
A system can be classified with respect to the physical arrangement of its components and the functioning principle. A circular consecutive k-within-m-out-of-n:F system consists of n circularly ordered components and fails if and only if there are m consecutive components that include among them at least k failed components. A circular consecutive k-within-m-out-of-n:F system turns into a circular consecutive k-out-of-n:F system for m = k and a k-out-of-n:F system for m = n. In this study, a signature-based analysis of the circular consecutive k-within-m-out-of-n:F system is performed. A new approximation to this system is provided based on the maximum number of failed components, and an illustrative example is given for different values of n, m, k to compare the approximate results with simulated and exact results.
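The only change from the linear case is that the window of m consecutive components wraps around the circle, which a sketch can make concrete (component states given as a 0/1 list, 1 = failed):

```python
def circular_system_fails(states, k, m):
    """Circular consecutive k-within-m-out-of-n:F failure check: the system
    fails iff some window of m circularly consecutive components contains
    at least k failed ones. Implemented by unwrapping the circle."""
    n = len(states)
    ext = states + states[:m - 1]        # append first m-1 states to wrap around
    return any(sum(ext[i:i + m]) >= k for i in range(n))
```

Note that a configuration whose failures straddle the "seam" (e.g. failed components at positions n − 1 and 0) can fail circularly while its linear counterpart survives.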

5.
Consider a population of n individuals that move independently among a finite set {1, 2, …, k} of states in a sequence of trials t = 0, 1, 2, …, m, each according to a Markov chain with transition probability matrix P. This paper deals with the problem of estimating P on the basis of aggregate data which record only the numbers of individuals that occupy each of the k states at times t = 0, 1, 2, …, m. Estimation is accomplished using conditional least squares, and asymptotic results are verified for the case n → ∞. A weighted least-squares estimator is introduced and compared with previous estimators. Some comments are made on estimability questions that arise when only aggregate data are available.
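The core estimating idea can be sketched as follows. Since E[y_{t+1} | y_t] = P′y_t for the vector y_t of aggregate proportions, an unconstrained least-squares fit recovers P. This sketch omits the simplex constraints on the rows of P and the weighting that the paper's conditional and weighted least-squares estimators use:

```python
import numpy as np

def estimate_P_aggregate(counts):
    """counts: array of shape (m+1, k); row t gives the numbers of
    individuals occupying each of the k states at time t.

    Unconstrained least-squares fit of Y_next ~ Y_prev @ P, where the
    rows of Y are aggregate state proportions. Illustrative only."""
    y = counts / counts.sum(axis=1, keepdims=True)   # aggregate proportions
    P_hat, *_ = np.linalg.lstsq(y[:-1], y[1:], rcond=None)
    return P_hat
```

On noiseless aggregate trajectories (large n), this recovers P exactly whenever the observed proportion vectors span the state space.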

6.
Making predictions of future realized values of random variables based on currently available data is a frequent task in statistical applications. In some applications, the interest is to obtain a two-sided simultaneous prediction interval (SPI) to contain at least k out of m future observations with a certain confidence level based on n previous observations from the same distribution. A closely related problem is to obtain a one-sided upper (or lower) simultaneous prediction bound (SPB) to exceed (or be exceeded by) at least k out of m future observations. In this paper, we provide a general approach for computing SPIs and SPBs based on data from a particular member of the (log)-location-scale family of distributions with complete or right censored data. The proposed simulation-based procedure can provide exact coverage probability for complete and Type II censored data. For Type I censored data, our simulation results show that our procedure provides satisfactory results in small samples. We use three applications to illustrate the proposed simultaneous prediction intervals and bounds.
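The simulation idea behind such procedures can be sketched for the simplest case: a normal sample and a one-sided upper SPB of the form x̄ + r·s. By location-scale invariance the N(0, 1) case suffices for evaluating coverage; calibrating r to hit a target level (which the paper's procedure does, also under censoring) is not shown here:

```python
import numpy as np

def coverage(r, n, k, m, nsim=20000, rng=None):
    """Monte Carlo estimate of P(the bound xbar + r*s exceeds at least
    k of m future observations), for n past N(0,1) observations."""
    rng = np.random.default_rng(rng)
    past = rng.standard_normal((nsim, n))
    future = rng.standard_normal((nsim, m))
    bound = past.mean(axis=1) + r * past.std(axis=1, ddof=1)
    covered = (future <= bound[:, None]).sum(axis=1) >= k
    return covered.mean()
```

One would then search over r (e.g. by bisection) until the estimated coverage equals the nominal level.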

7.
Consider k (≥ 2) normal populations with unknown means μ1, …, μk, and a common known variance σ². Let μ[1] ≤ ⋯ ≤ μ[k] denote the ordered μi. The populations associated with the t (1 ≤ t ≤ k − 1) largest means are called the t best populations. Hsu and Panchapakesan (2004) proposed and investigated a procedure RHP for selecting a nonempty subset of the k populations whose size is at most m (1 ≤ m ≤ k − t) so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whenever μ[k − t + 1] − μ[k − t] ≥ δ*, where P* and δ* are specified in advance of the experiment. This probability requirement is known as the indifference-zone probability requirement. In the present article, we investigate the same procedure RHP for the same goal as before but when k − t < m ≤ k − 1, so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whatever be the configuration of the unknown μi. The probability requirement in this latter case is termed the subset selection probability requirement. Santner (1976) proposed and investigated a different procedure (RS) based on samples of size n from each of the populations, considering both cases, 1 ≤ m ≤ k − t and k − t < m ≤ k. The special case of t = 1 was earlier studied by Gupta and Santner (1973) and Hsu and Panchapakesan (2002) for their respective procedures.

8.
In this article, it is explicitly demonstrated that the probability of non-exceedance of the mth value in n order-ranked events equals m/(n + 1). Consequently, the plotting position in extreme value analysis should be considered not as an estimate, but as equal to m/(n + 1), regardless of the parent distribution and the application. The many other suggested plotting formulas and numerical methods to determine them should thus be abandoned. The article is intended to mark the end of the century-long controversial discussion on plotting positions.
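The claim is easy to check numerically: for U(0, 1) draws the distribution function is F(x) = x, so the mean of the mth order statistic directly estimates E[F(X_(m))], which should equal m/(n + 1). A quick Monte Carlo sketch:

```python
import numpy as np

def mean_nonexceedance(n, m, nsim=200000, rng=0):
    """Monte Carlo estimate of E[F(X_(m))] for the m-th order statistic
    of n i.i.d. draws, using U(0,1) so that F(x) = x. Should approach
    m / (n + 1)."""
    rng = np.random.default_rng(rng)
    u = np.sort(rng.random((nsim, n)), axis=1)
    return u[:, m - 1].mean()
```

For instance, the median of nine uniform draws has mean non-exceedance probability 5/10 = 0.5.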

9.
Zijian Wang  Yi Wu  Mengge Wang 《Statistics》2019,53(2):261-282
In this paper, the complete convergence and complete moment convergence for arrays of rowwise m-extended negatively dependent (m-END, for short) random variables are established. As an application, the Marcinkiewicz-Zygmund type strong law of large numbers for m-END random variables is also achieved. By using the results that we established, we further investigate the strong consistency of the least squares estimator in simple linear errors-in-variables models, and provide some simulations to verify the validity of our theoretical results.

10.
Since structural changes in a possibly transformed financial time series may contain important information for investors and analysts, we consider the following problem of sequential econometrics. For a given time series we aim at detecting the first change-point where a jump of size a occurs, i.e., the mean changes from, say, m0 to m0 + a and returns to m0 after a possibly short period s. To address this problem, we study a Shewhart-type control chart based on a sequential version of the sigma filter, which extends kernel smoothers by employing stochastic weights depending on the process history to detect jumps in the data more accurately than classical approaches. We study both theoretical properties and performance issues. Concerning the statistical properties, it is important to know whether the normed delay time of the considered control chart is bounded, at least asymptotically. Extending known results for linear statistics employing deterministic weighting schemes, we establish an upper bound which holds if the memory of the chart tends to infinity. The performance of the proposed control charts is studied by simulations. We confine ourselves to some special models which try to mimic important features of real time series. Our empirical results provide some evidence that jump-preserving weights are preferable under certain circumstances.
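The jump-preserving idea of the sigma filter can be sketched in a few lines: at each time point, average only those recent observations lying within c·σ of the current value, so observations from before a jump get weight zero. This is only the smoothing core; the paper's sequential version and control-chart statistic add further structure:

```python
import numpy as np

def sigma_filter(x, window, c, sigma):
    """Jump-preserving running mean: at time t, average the last `window`
    observations whose distance from x[t] is at most c*sigma (stochastic
    0/1 weights depending on the process history)."""
    x = np.asarray(x, dtype=float)
    out = np.empty(len(x))
    for t in range(len(x)):
        hist = x[max(0, t - window + 1):t + 1]
        w = np.abs(hist - x[t]) <= c * sigma   # keep only nearby history
        out[t] = hist[w].mean()                # x[t] itself is always kept
    return out
```

On a noiseless step series the filter tracks the new level immediately after the jump, whereas a plain moving average would blur across it.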

11.
Judges rank k out of t objects according to m replications of a basic balanced incomplete block design with b blocks. In Alvo and Cabilio (1991), it is shown that the Durbin test, which is the usual test in this situation, can be written in terms of Spearman correlations between the blocks, and using a Kendall correlation, they generated a new statistic for this situation. This Kendall tau based statistic has a richer support than the Durbin statistic, and is at least as efficient. In the present paper, exact and simulation-based tables are generated for both statistics, and various approximations to these null distributions are considered and compared.

12.
We consider a secretary type problem where an administrator who has only one on-line choice in m consecutive searches has to choose the best candidate in one of them.
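For intuition, the textbook single-search benchmark can be simulated: reject the first n/e candidates, then take the first record. The article's variant with m consecutive searches differs; this sketch only establishes the classical ≈ 0.368 baseline it generalizes:

```python
import math
import random

def classical_secretary(n, trials=20000, rng=None):
    """Estimate the success probability of the classical stopping rule:
    observe the first int(n/e) candidates without choosing, then pick the
    first candidate better than all seen so far (or the last one)."""
    rng = random.Random(rng)
    r = int(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))          # 0 = best candidate
        rng.shuffle(ranks)
        best_seen = min(ranks[:r], default=n)
        chosen = next((x for x in ranks[r:] if x < best_seen), ranks[-1])
        wins += chosen == 0
    return wins / trials
```

The estimate hovers around 1/e ≈ 0.37 for moderate n.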

13.
One of the two independent stochastic processes (or 'arms') is selected and observed sequentially at each of n (≤ ∞) stages. Arm 1 yields observations identically distributed with unknown probability measure P with a Dirichlet process prior, whereas observations from arm 2 have known probability measure Q. Future observations are discounted, and at stage m the payoff is am (≥ 0) times the observation Zm at that stage. The objective is to maximize the total expected payoff. Clayton and Berry (1985) consider this problem when am equals 1 for m ≤ n and 0 for m > n (n < ∞). In this paper, the Clayton and Berry (1985) results are extended to the case of regular discount sequences of horizon n, which may also be infinite. The results are illustrated with numerical examples. In case of geometric discounting, the results apply to a bandit with many independent unknown Dirichlet arms.

14.

Let X 1, …, X m and Y 1, …, Y n be independent random variables, where X 1, …, X m are i.i.d. with continuous distribution function (df) F, and Y 1, …, Y n are i.i.d. with continuous df G. For testing the hypothesis H 0: F = G, we introduce and study analogues of the celebrated Kolmogorov–Smirnov and one- and two-sided Cramér-von Mises statistics that are functionals of a suitably integrated two-sample empirical process. Furthermore, we characterize those distributions for which the new tests are locally Bahadur optimal within the setting of shift alternatives.
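The un-integrated benchmark these analogues start from is the classical two-sample Kolmogorov–Smirnov statistic, the supremum distance between the two empirical distribution functions (the paper's statistics are functionals of an integrated two-sample empirical process, which this sketch does not implement):

```python
import bisect

def ks_two_sample(x, y):
    """Classical two-sample Kolmogorov-Smirnov statistic
    sup_t |F_m(t) - G_n(t)|, where F_m and G_n are the empirical
    distribution functions of the two samples."""
    xs, ys = sorted(x), sorted(y)

    def ecdf(s, t):
        # fraction of sample s that is <= t
        return bisect.bisect_right(s, t) / len(s)

    # The supremum is attained at one of the observed points.
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in xs + ys)
```

Two fully separated samples give the maximal value 1; identical samples give 0.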

15.
Multivariate combination-based permutation tests have been widely used in many complex problems. In this paper we focus on the equipower property, derived directly from the finite-sample consistency property, and we analyze the impact of the dependency structure on the combined tests. At first, we consider the finite-sample consistency property, which assumes that sample sizes are fixed (and possibly small) and considers on each subject a large number of informative variables. Moreover, since permutation test statistics do not need to be standardized, we need not assume that data are homoscedastic in the alternative. The equipower property is then derived from these two notions: consider the unconditional permutation power of a test statistic T for fixed sample sizes, with V ≥ 2 independent and identically distributed variables and fixed effect δ, calculated in two ways: (i) by considering two V-dimensional samples sized m1 and m2, respectively; (ii) by considering two unidimensional samples sized n1 = Vm1 and n2 = Vm2, respectively. Since the unconditional power essentially depends on the noncentrality induced by T, and the two ways are provided with exactly the same likelihood and the same noncentrality, we show that they are provided with the same power function, at least approximately. To investigate both the equipower property and the power behavior in the presence of correlation, we performed an extensive simulation study.

16.
A sequence of independent lifetimes X 1, X 2,…, X m , X m+1,… X n was observed from a geometric population with parameter q 1, but later it was found that there was a change in the system at some point of time m, reflected in the sequence after X m by a change in parameter to q 2. The Bayes estimates of m, q 1, q 2, and the reliabilities R 1(t) and R 2(t) at time t are derived for symmetric and asymmetric loss functions under informative and non-informative priors. A simulation study is carried out.
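The likelihood structure of such change-point models can be sketched in the simplest case: treat q1 and q2 as known and put a uniform prior on the change point m, so the posterior of m is proportional to the two-segment likelihood. The paper places priors on q1 and q2 as well, which this sketch omits:

```python
import numpy as np

def changepoint_posterior(x, q1, q2):
    """Posterior over the change point m = 1, ..., n-1 (uniform prior),
    with geometric parameters q1, q2 known. Geometric pmf on {1, 2, ...}:
    P(X = x) = q * (1 - q)**(x - 1)."""
    x = np.asarray(x)
    n = len(x)

    def loglik(q, xs):
        return len(xs) * np.log(q) + (xs - 1).sum() * np.log(1 - q)

    ll = np.array([loglik(q1, x[:m]) + loglik(q2, x[m:]) for m in range(1, n)])
    post = np.exp(ll - ll.max())            # stabilize before normalizing
    return post / post.sum()                # posterior probabilities of m
```

On data with an abrupt change the posterior mass concentrates at the true change point.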

17.
The traditional non-parametric bootstrap (referred to as the n-out-of-n bootstrap) is a widely applicable and powerful tool for statistical inference, but in important situations it can fail. It is well known that by using a bootstrap sample of size m, different from n, the resulting m-out-of-n bootstrap provides a method for rectifying the traditional bootstrap inconsistency. Moreover, recent studies have shown that interesting cases exist where it is better to use the m-out-of-n bootstrap in spite of the fact that the n-out-of-n bootstrap works. In this paper, we discuss another case by considering its application to hypothesis testing. Two new data-based choices of m are proposed in this set-up. The results of simulation studies are presented to provide empirical comparisons between the performance of the traditional bootstrap and the m-out-of-n bootstrap, based on the two data-dependent choices of m, as well as on an existing method in the literature for choosing m. These results show that the m-out-of-n bootstrap, based on our choice of m, generally outperforms the traditional bootstrap procedure as well as the procedure based on the choice of m proposed in the literature.
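The mechanics are simple: resample with replacement, but with sample size m < n. The sketch below is generic; the paper's data-driven choices of m and the hypothesis-testing application are not reproduced. The sample maximum is the textbook statistic for which the n-out-of-n bootstrap is inconsistent while the m-out-of-n version (m → ∞, m/n → 0) is not:

```python
import numpy as np

def m_out_of_n_bootstrap(x, stat, m, nboot=2000, rng=0):
    """Return nboot bootstrap replicates of `stat`, each computed on a
    resample of size m (< n) drawn with replacement from x."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    samples = rng.choice(x, size=(nboot, m), replace=True)
    return np.apply_along_axis(stat, 1, samples)
```

With m ≪ n, far fewer replicates tie the observed maximum, which is what restores a non-degenerate limiting distribution.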

18.
The authors propose nonparametric tests for the hypothesis of no direct treatment effects, as well as for the hypothesis of no carryover effects, for balanced crossover designs in which the number of treatments equals the number of periods p, where p ≥ 3. They suppose that the design consists of n replications of balanced crossover designs, each formed by m Latin squares of order p. Their tests are permutation tests which are based on the n vectors of least squares estimators of the parameters of interest obtained from the n replications of the experiment. They obtain both the exact and limiting distribution of the test statistics, and they show that the tests have, asymptotically, the same power as the F-ratio test.

19.
A sequence of independent lifetimes X 1,…, X m , X m+1,…, X n was observed from an inverse Weibull distribution with mean stress θ1 and reliability R 1(t 0) at time t 0, but later it was found that there was a change in the system at some point of time m, reflected in the sequence after X m by a change in mean stress to θ2 and in reliability to R 2(t 0) at time t 0. The Bayes estimators of m, R 1(t 0), and R 2(t 0) are derived when poor and more detailed prior information is introduced into the inferential procedure. The effects of correct and wrong prior information on the Bayes estimators are studied.

20.
A sequence of independent lifetimes X 1, X 2,…, X m , X m+1,…, X n was observed from a mixture of a degenerate and a left-truncated exponential (LTE) distribution, with reliability R at time τ, minimum life length η, and unknown proportion p 1 and parameter θ1, but later it was found that there was a change in the system at some point of time m, reflected in the sequence after X m by a change in the reliability R at time τ and in the unknown proportion p 2 and parameter θ2. This distribution occurs in many practical situations; for instance, the life of a unit may have an LTE distribution while some of the units fail instantaneously. Apart from mixture distributions, the phenomenon of a change point is also observed in several situations in life testing and reliability estimation problems. It may happen that at some point of time instability in the sequence of failure times is observed. The problem under study is when and where this change started occurring; this is called the change point inference problem. The estimators of m, R 1(t 0), R 2(t 0), p 1, and p 2 are derived under asymmetric loss functions, namely the LINEX and general entropy loss functions. Both non-informative and informative priors are considered. The effects of the prior specification on the Bayes estimates of the change point are also studied.

