Similar Documents
14 similar documents found.
1.
ABSTRACT

Consider k (≥ 2) independent exponential populations Π1, Π2, …, Πk, having a common unknown location parameter μ ∈ (−∞, ∞) (also called the guarantee time) and unknown scale parameters σ1, σ2, …, σk, respectively (also called the remaining mean lifetimes after the completion of guarantee times), σi > 0, i = 1, 2, …, k. Assume that the correct ordering between σ1, σ2, …, σk is not known a priori, and let σ[i], i = 1, 2, …, k, denote the ith smallest of the σj's, so that σ[1] ≤ σ[2] ≤ ··· ≤ σ[k]. Then Θi = μ + σi is the mean lifetime of Πi, i = 1, 2, …, k. Let Θ[1] ≤ Θ[2] ≤ ··· ≤ Θ[k] denote the ranked values of the Θj's, so that Θ[i] = μ + σ[i], i = 1, 2, …, k, and let Π(i) denote the unknown population associated with the ith smallest mean lifetime Θ[i] = μ + σ[i], i = 1, 2, …, k. Based on independent random samples from the k populations, we propose a selection procedure, under the subset selection formulation, for the goal of selecting the population having the longest mean lifetime Θ[k] (called the "best" population). Tables for the implementation of the proposed selection procedure are provided. It is established that the proposed subset selection procedure is monotone for general k (≥ 2). For k = 2, we consider the loss measured by the size of the selected subset and establish that the proposed subset selection procedure is minimax among selection procedures that satisfy a certain probability requirement (called the P*-condition) for the inclusion of the best population in the selected subset.
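The subset-selection setup above can be sketched numerically. The rule below is an illustrative Gupta-type rule (keep every population whose estimated mean lifetime is within a factor c of the largest estimate); the constant c, the estimator of μ, and the sample sizes are assumptions for illustration, not the paper's tabulated procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_subset(samples, c=0.5):
    """Illustrative Gupta-type subset rule (NOT the paper's exact procedure):
    estimate each population's mean lifetime Theta_i = mu + sigma_i and keep
    every population whose estimate is at least c times the largest estimate."""
    # the pooled minimum estimates the common guarantee time mu
    mu_hat = min(x.min() for x in samples)
    # remaining mean lifetime sigma_i is estimated from the shifted sample
    theta_hat = np.array([mu_hat + (x - mu_hat).mean() for x in samples])
    return np.flatnonzero(theta_hat >= c * theta_hat.max()), theta_hat

# three exponential populations with common guarantee time mu = 2
mu, sigmas = 2.0, [1.0, 1.5, 4.0]
samples = [mu + rng.exponential(s, size=50) for s in sigmas]
chosen, est = select_subset(samples)
```

The population with the largest estimated mean lifetime is always retained, which is the monotonicity property the abstract refers to in miniature.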

2.
Suppose exponential populations πi with parameters (μi, σi) (i = 1, 2, …, k) are given. This article discusses how to select "good" populations in the sense of Lam (1986, Biometrika 73(1):201–206). Depending on whether the σi's are known or unknown, several one-stage procedures and a two-stage procedure of selection are proposed. The two-stage procedure can be replaced by a one-stage procedure if the second-stage sample proves unattainable. An attractive feature of these procedures is that they need no new statistical tables to implement.

3.
An increasing number of contemporary datasets are high dimensional. Applications require these datasets be screened (or filtered) to select a subset for further study. Multiple testing is the standard tool in such applications, although alternatives have begun to be explored. In order to assess the quality of selection in these high-dimensional contexts, Cui and Wilson (2008b, Biometrical Journal 50(5):870–883) proposed two viable methods of calculating the probability that any such selection is correct (PCS). PCS thereby serves as a measure of the quality of competing statistics used for selection. The first simulation study of this article investigates the two PCS statistics of the above article. It shows that in the high-dimensional case PCS can be accurately estimated and is robust under certain conditions. The second simulation study investigates a nonparametric estimator of PCS.
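A minimal Monte Carlo sketch of PCS: with many populations and a rule that selects the d largest sample means, PCS is the chance that the selected set coincides with the d truly best. The normal model, effect size, and sample size below are illustrative assumptions, not Cui and Wilson's estimators.

```python
import numpy as np

def estimate_pcs(means, n=20, d=5, reps=2000, seed=1):
    """Monte Carlo estimate of the probability of correct selection (PCS):
    the chance that the d populations with the largest sample means are
    exactly the d populations with the largest true means."""
    rng = np.random.default_rng(seed)
    means = np.asarray(means)
    best = set(np.argsort(means)[-d:])          # indices of the d truly best
    hits = 0
    for _ in range(reps):
        # sample means for n observations per population, unit variance
        xbar = rng.normal(means, 1.0 / np.sqrt(n))
        hits += set(np.argsort(xbar)[-d:]) == best
    return hits / reps

# a small high-dimensional toy: 100 "genes", 5 truly shifted, 95 null
mu = np.zeros(100)
mu[:5] = 2.0
pcs = estimate_pcs(mu)
```

With a clear separation between the shifted and null populations, the estimated PCS is close to 1; shrinking the effect size or the per-population sample size drives it down.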

4.
We propose a new procedure for the multinomial selection problem to solve a real problem of any modern Air Force: the elaboration of better air-to-air tactics for Beyond Visual Range air-to-air combat that maximize aircraft survival probability H(θ, ω) as well as enemy aircraft downing probability G(θ, ω). In this study, using a low-resolution simulator with generic parameters for the aircraft and missiles, we increased average success rates of 16.69% and 16.23% for H(θ, ω) and G(θ, ω), respectively, to average success rates of 76.85% and 79.30%. With a low probability of being wrong, we can assert that the selected tactic has a greater probability of yielding higher success rates in both H(θ, ω) and G(θ, ω) than any other simulated tactic.

5.
Ranking and selection theory is used to estimate the number of signals present in colored noise. The data structure follows the well-known MUSIC (MUltiple SIgnal Classification) model. We deal with the eigenvalues of a covariance matrix, using the MUSIC model and colored noise. The data matrix can be written as the product of two matrices. The first matrix is the sample covariance matrix of the observed vectors. The second matrix is the inverse of the sample covariance matrix of reference vectors. We propose a multi-step selection procedure to construct a confidence interval on the number of signals present in a data set. Properties of this procedure are stated and proved, and are used to compute the required parameters (procedure constants). Numerical examples are given to illustrate our theory.
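The two-matrix construction can be sketched directly: form the sample covariance of the observed vectors, multiply by the inverse sample covariance of noise-only reference vectors, and inspect the eigenvalues; noise eigenvalues cluster near 1 while signal eigenvalues stand out. The fixed threshold below is a crude stand-in for the paper's multi-step selection procedure, and all dimensions and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, q = 6, 400, 2              # dimension, sample size, true number of signals

# a colored-noise covariance shared by observed and reference vectors
A = rng.normal(size=(p, p))
noise_cov = A @ A.T / p + np.eye(p)
L = np.linalg.cholesky(noise_cov)

signals = rng.normal(size=(q, n))                  # q independent source signals
steering = rng.normal(size=(p, q)) * 2.0           # mixing (steering) matrix
obs = steering @ signals + L @ rng.normal(size=(p, n))   # signal + colored noise
ref = L @ rng.normal(size=(p, n))                  # noise-only reference vectors

S1 = obs @ obs.T / n             # sample covariance of the observed vectors
S2 = ref @ ref.T / n             # sample covariance of the reference vectors
# eigenvalues of S1 * S2^{-1}: noise-only eigenvalues cluster near 1
eig = np.sort(np.linalg.eigvals(S1 @ np.linalg.inv(S2)).real)[::-1]
n_signals = int((eig > 3.0).sum())   # crude fixed threshold, a stand-in for
                                     # the paper's multi-step procedure
```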

6.
The problem of optimal non-sequential allocation of observations for the selection of the better binomial population is considered in the case of fixed sampling costs and budget. With the appropriate choice of selection rule, it is shown that a 70% reduction in the probability of incorrect selection is possible by using an unequal rather than an equal allocation. Simple formulae are given for the appropriate selection rule and unequal allocation in large samples.
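The allocation comparison can be sketched by simulation. The "pick the larger observed proportion" rule and the variance-proportional split below are illustrative assumptions; the paper's optimal large-sample rule and allocation (which yield the 70% reduction) may differ.

```python
import numpy as np

def p_incorrect(n1, n2, p1=0.8, p2=0.6, reps=20000, seed=3):
    """Monte Carlo probability of selecting the worse binomial population when
    the rule picks the larger observed success proportion (ties split evenly).
    Illustrative rule, not the paper's optimal one."""
    rng = np.random.default_rng(seed)
    f1 = rng.binomial(n1, p1, reps) / n1
    f2 = rng.binomial(n2, p2, reps) / n2
    wrong = (f2 > f1) + 0.5 * (f2 == f1)
    return wrong.mean()

total = 100
equal = p_incorrect(50, 50)
# unequal split with n_i proportional to sqrt(p_i (1 - p_i)) -- an assumed
# variance-based heuristic, not necessarily the paper's optimal allocation
w = np.sqrt(np.array([0.8 * 0.2, 0.6 * 0.4]))
n1 = int(round(total * w[0] / w.sum()))
n2 = total - n1
unequal = p_incorrect(n1, n2)
```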

7.
Suppose exponential populations πi with parameters (μi, σi) (i = 1, 2, …, K) are given. The σi can be unknown and unequal. This article discusses how to select the k (≥ 1) best populations. Under the subset selection formulation, a one-stage procedure is proposed. Under the indifference-zone formulation, a two-stage procedure is proposed. An appealing feature of these procedures is that no statistical tables are needed for their implementation.

8.
Abstract

In survival or reliability data analysis, it is often useful to estimate the quantiles of the lifetime distribution, such as the median time to failure. Different nonparametric methods can construct confidence intervals for the quantiles of lifetime distributions, some of which are implemented in commonly used statistical software packages. We investigate the performance of different interval estimation procedures under a variety of settings with different censoring schemes. Our main objectives in this paper are to (i) evaluate the performance of confidence intervals based on the transformation approach commonly used in statistical software, (ii) introduce a new density-estimation-based approach to obtain confidence intervals for survival quantiles, and (iii) compare it with the transformation approach. We provide a comprehensive comparative study and offer some useful practical recommendations based on our results. Some numerical examples are presented to illustrate the methodologies developed.
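A sketch of one nonparametric route to such intervals: a Kaplan-Meier estimate of the survival median combined with a bootstrap percentile interval. This is a baseline illustrating quantile interval estimation under right censoring, not the transformation or density-estimation approach of the paper; the distributions and constants are assumptions.

```python
import numpy as np

def km_quantile(time, event, q=0.5):
    """Kaplan-Meier estimate of the q-th survival quantile: the smallest
    observed time at which the product-limit survival curve drops to 1 - q."""
    order = np.argsort(time)
    t, d = time[order], event[order]
    s, at_risk = 1.0, len(t)
    for ti, di in zip(t, d):
        if di:                       # an observed failure reduces the curve
            s *= 1 - 1 / at_risk
        at_risk -= 1
        if s <= 1 - q:
            return ti
    return np.inf                    # quantile not reached (heavy censoring)

rng = np.random.default_rng(4)
n = 200
life = rng.exponential(2.0, n)       # true median = 2 ln 2 ~ 1.386
cens = rng.exponential(6.0, n)       # independent right censoring
time = np.minimum(life, cens)
event = (life <= cens).astype(int)

# bootstrap percentile interval for the median survival time
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(km_quantile(time[idx], event[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
```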

9.
Consider k (≥ 2) normal populations with unknown means μ1, …, μk and a common known variance σ². Let μ[1] ≤ ··· ≤ μ[k] denote the ordered μi. The populations associated with the t (1 ≤ t ≤ k − 1) largest means are called the t best populations. Hsu and Panchapakesan (2004) proposed and investigated a procedure RHP for selecting a nonempty subset of the k populations, whose size is at most m (1 ≤ m ≤ k − t), so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whenever μ[k − t + 1] − μ[k − t] ≥ δ*, where P* and δ* are specified in advance of the experiment. This probability requirement is known as the indifference-zone probability requirement. In the present article, we investigate the same procedure RHP for the same goal as before, but when k − t < m ≤ k − 1, so that at least one of the t best populations is included in the selected subset with a minimum guaranteed probability P* whatever the configuration of the unknown μi. The probability requirement in this latter case is termed the subset selection probability requirement. Santner (1976) proposed and investigated a different procedure (RS) based on samples of size n from each of the populations, considering both cases, 1 ≤ m ≤ k − t and k − t < m ≤ k. The special case of t = 1 was earlier studied by Gupta and Santner (1973) and Hsu and Panchapakesan (2002) for their respective procedures.

10.
A useful parameterization of the exponential failure model with imperfect signalling, under a random censoring scheme, is considered to accommodate covariates. Simple sufficient conditions for the existence, uniqueness, consistency, and asymptotic normality of maximum likelihood estimators for the parameters in these models are given. The results are then applied to derive the asymptotic properties of the likelihood ratio test for a difference in failure signalling proportions between groups in a 'one-way' classification.

11.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was two times higher than the nominal size. For simple monotone dose-response functions, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response functions it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrated the advantage of the proposed approach. Conclusion: Our results confirm that selecting the functional form of the dose-response a posteriori induces Type I error inflation. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
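The effect of a-posteriori selection can be reproduced in a stripped-down setting. The sketch below substitutes a Gaussian linear model for the Cox model (an assumption for simplicity): under the null, the maximum LRT over three candidate transformations is compared with the nominal chi-square cutoff, and a corrected critical value is read off the simulated null distribution, mirroring the third procedure.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 100, 3000

def lrt(y, x):
    """Gaussian LRT statistic: one-covariate linear model vs intercept-only."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss1 = ((y - X @ beta) ** 2).sum()
    rss0 = ((y - y.mean()) ** 2).sum()
    return n * np.log(rss0 / rss1)

x = rng.uniform(0.5, 5.0, n)
transforms = [x, np.log(x), np.sqrt(x)]     # candidate dose-response shapes
crit = 3.841                                # chi-square(1) cutoff, alpha = 0.05

max_stats = []
for _ in range(reps):
    y = rng.normal(size=n)                  # null: no covariate effect at all
    max_stats.append(max(lrt(y, t) for t in transforms))
max_stats = np.array(max_stats)

naive_size = (max_stats > crit).mean()          # a-posteriori selection + chi2 cutoff
corrected_crit = np.quantile(max_stats, 0.95)   # simulation-corrected cutoff
```

Because the maximum over candidate fits is always at least as large as any single fit, the naive size exceeds the nominal 5%, and the corrected cutoff exceeds 3.841.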

12.
A considerable problem in statistics and risk management is finding distributions that capture the complex behaviour exhibited by financial data. The importance of higher-order moments in decision making has been well recognized, and there is increasing interest in modelling with distributions that are able to account for these effects. The Pearson system can be used to model a wide range of distributions with various skewness and kurtosis values. This paper provides computational examples of a new, easily implemented method for selecting probability density functions from the Pearson family of distributions. We apply this method to daily, monthly, and annual series, using a range of data from commodity markets to macroeconomic variables.
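One classical way to select within the Pearson family is the kappa criterion computed from the sample skewness and kurtosis. The sketch below classifies into the main types only; the symmetric shortcut, the tolerances, and the omission of boundary types III and V are simplifications, and the paper's method may differ in detail.

```python
import numpy as np

def pearson_type(data):
    """Classify a sample into a main Pearson type via the kappa criterion,
    computed from beta1 = skewness^2 and beta2 = kurtosis.  Boundary types
    III and V are folded into their neighbours for brevity."""
    x = np.asarray(data, float)
    c = x - x.mean()
    m2, m3, m4 = (c**2).mean(), (c**3).mean(), (c**4).mean()
    b1, b2 = m3**2 / m2**3, m4 / m2**2
    if b1 < 1e-3:                            # effectively symmetric
        if abs(b2 - 3) < 0.1:
            return "normal"
        return "II (symmetric beta)" if b2 < 3 else "VII (Student-t like)"
    kappa = b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))
    if kappa < 0:
        return "I (beta)"
    return "IV" if kappa < 1 else "VI"

rng = np.random.default_rng(6)
t_gauss = pearson_type(rng.normal(size=100000))
t_beta = pearson_type(rng.beta(2.0, 5.0, size=100000))
t_logn = pearson_type(rng.lognormal(0.0, 0.5, size=100000))
```

On large samples the classifier recovers the expected regions: Gaussian data land at "normal", beta data in Type I, and lognormal data in the Type VI region.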

13.
This paper offers a predictive approach for the selection of a fixed number t of treatments from k treatments, with the goal of controlling predictive losses. For the ith treatment, independent observations Xij (j = 1, 2, …, n) can be observed, where the Xij's are normally distributed N(θi, σ²). The ranked values of the θi's and Xi's are θ(1) ≤ … ≤ θ(k) and X[1] ≤ … ≤ X[k], and the selected subset S = {[k], [k − 1], …, [k − t + 1]} will be considered. This paper distinguishes between two types of loss functions. A type I loss function associated with a selected subset S is the loss in utility from the selector's viewpoint and is a function of θi with i ∈ S. A type II loss function associated with S measures the unfairness in the selection from the candidates' viewpoint and is a function of θi with i ∉ S. This paper shows that under mild assumptions on the loss functions S is optimal, and it provides the necessary formulae for choosing n so that the two types of loss can be controlled individually or simultaneously with high probability. Predictive bounds for the losses are provided. Numerical examples support the usefulness of the predictive approach over the design-of-experiment approach.
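The sample-size choice can be illustrated by simulation: select the t treatments with the largest sample means and check how often the selection incurs a loss, here measured by a simple "some selected treatment falls more than δ below θ(k−t+1)" event. That event, σ = 1, and all constants are illustrative assumptions, not the paper's loss functions or predictive bounds.

```python
import numpy as np

def miss_prob(theta, n, t=2, delta=0.5, reps=4000, seed=7):
    """Fraction of replications in which some selected treatment's true mean
    falls more than delta below theta_(k-t+1) -- a simple stand-in for the
    paper's loss control, used here to compare sample sizes n."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, float)
    cutoff = np.sort(theta)[-t] - delta
    misses = 0
    for _ in range(reps):
        xbar = rng.normal(theta, 1.0 / np.sqrt(n))   # sigma = 1 assumed
        sel = np.argsort(xbar)[-t:]                  # S = top-t sample means
        misses += (theta[sel] < cutoff).any()
    return misses / reps

theta = [0.0, 0.2, 0.4, 1.5, 2.0]        # k = 5 treatments, select t = 2
small_n = miss_prob(theta, n=5)
large_n = miss_prob(theta, n=80)
```

Increasing n drives the loss probability down, which is the mechanism behind choosing n to control the losses with high probability.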

14.
Abstract

This paper proposes a new mathematical model for the reliability-redundancy allocation problem (RRAP) with a choice of redundancy strategies. To maximize the reliability of a system, this model chooses the best redundancy strategy, from among both active and standby ones, for each subsystem. For subsystems with a standby strategy, a continuous-time Markov chain model is used to calculate exact reliability values. In order to solve the proposed mixed-integer non-linear programming model, a powerful evolutionary algorithm called the water cycle algorithm (WCA) is developed and implemented on three well-known benchmark problems. Finally, the results on the different benchmark problems are compared with those previously reported to show the superiority of the proposed model and the efficiency of the WCA.
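For a single subsystem with exponential component lifetimes, the two redundancy strategies admit closed forms that show why the choice matters. The sketch below compares active (hot) parallel with cold standby under perfect switching; the CTMC machinery in the paper generalizes this to imperfect switching, and the rate and mission time here are illustrative.

```python
import math

def active_parallel(lam, t, m):
    """Reliability of m identical exponential components in active (hot)
    parallel: the system fails only when all m have failed."""
    return 1 - (1 - math.exp(-lam * t)) ** m

def cold_standby(lam, t, m):
    """Reliability of one operating component with m - 1 cold standbys and
    perfect switching: the system survives while fewer than m failures have
    occurred, i.e. the Erlang(m, lam) survival probability.  (A closed form;
    the paper's CTMC handles more general standby behaviour.)"""
    x = lam * t
    return math.exp(-x) * sum(x**j / math.factorial(j) for j in range(m))

lam, t, m = 0.5, 2.0, 3          # illustrative failure rate, mission time, size
r_active = active_parallel(lam, t, m)
r_standby = cold_standby(lam, t, m)
```

Cold standby dominates active parallel here because standby units do not age while idle, which is exactly the trade-off the strategy-selection model exploits.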
