Similar literature: 20 similar articles found.
1.
The problem of selecting the Bernoulli population which has the highest "success" probability is considered. It has been noted in several articles that the probability of a correct selection is the same, uniformly in the Bernoulli p-vector (p1, p2, ..., pk), for two or more different selection procedures. We give a general theorem which explains this phenomenon.

An application of particular interest arises when "strong" curtailment of a single-stage procedure (as introduced by Bechhofer and Kulkarni (1982a)) is employed; the corresponding result for "weak" curtailment of a single-stage procedure needs no proof. The use of strong curtailment in place of weak curtailment requires no more (and usually far fewer) observations to achieve the same probability of a correct selection.
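A simplified Monte Carlo sketch can make the phenomenon concrete. It assumes k = 2 populations, vector-at-a-time sampling, and a terminal rule that selects the population with more successes after n stages, breaking ties at random; these specifics are illustrative and not taken from the paper. The strongly curtailed rule stops as soon as the leader's margin exceeds the number of remaining stages, so it reaches the same decision as the full single-stage rule (hence the same P(CS)) while using fewer observations.

```python
import numpy as np

rng = np.random.default_rng(0)

def single_stage(p, n):
    """Full single-stage rule: n Bernoulli observations from each of the two populations."""
    counts = rng.binomial(n, p)
    winners = np.flatnonzero(counts == counts.max())
    return rng.choice(winners), 2 * n                     # (selected index, observations used)

def strongly_curtailed(p, n):
    """Stop as soon as the trailing population can no longer catch the leader."""
    counts = np.zeros(2, dtype=int)
    for stage in range(1, n + 1):
        counts += rng.random(2) < p                       # one 'vector' of observations
        if abs(counts[0] - counts[1]) > n - stage:        # decision is already determined
            return int(counts[1] > counts[0]), 2 * stage
    winners = np.flatnonzero(counts == counts.max())
    return rng.choice(winners), 2 * n

p = np.array([0.7, 0.5])                                  # population 0 is the best
n, reps = 20, 20000
for rule in (single_stage, strongly_curtailed):
    sel, obs = zip(*(rule(p, n) for _ in range(reps)))
    print(rule.__name__, " P(CS) ~", np.mean(np.array(sel) == 0),
          " mean observations:", np.mean(obs))
```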

2.
This paper proposes the use of the likelihood ratio statistic for choosing between a Weibull and a gamma model; values of the probability of correct selection are obtained by Monte Carlo simulation. This method provides some basis for decision even when the sample size is small. The technique is applied to four sets of data from the literature.
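A rough sketch of the underlying idea (a generic maximized-likelihood comparison using scipy, with illustrative sample size and parameter values; it is not the paper's tabulated P(CS) study): fit both models by maximum likelihood, select the one with the larger maximized log-likelihood, and approximate the probability of correct selection by simulating repeatedly from a known model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def choose_model(x):
    """Pick 'weibull' or 'gamma' according to the larger maximized log-likelihood."""
    c, _, scale_w = stats.weibull_min.fit(x, floc=0)      # shape and scale, location fixed at 0
    ll_w = stats.weibull_min.logpdf(x, c, 0, scale_w).sum()
    a, _, scale_g = stats.gamma.fit(x, floc=0)
    ll_g = stats.gamma.logpdf(x, a, 0, scale_g).sum()
    return "weibull" if ll_w > ll_g else "gamma"

# Monte Carlo estimate of P(correct selection) when the data really are gamma(2, scale=1.5).
n, reps = 30, 500
hits = sum(
    choose_model(stats.gamma.rvs(2.0, scale=1.5, size=n, random_state=rng)) == "gamma"
    for _ in range(reps)
)
print("estimated P(CS):", hits / reps)
```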

3.
In the problem of selecting the best of k populations, Olkin, Sobel, and Tong (1976) have introduced the idea of estimating the probability of correct selection. In an attempt to improve on their estimator we consider an empirical Bayes approach. We compare the two estimators via analytic results and a simulation study.

4.
For ranking and selection problems, the true probability of a correct selection P(CS) is unknown even if a selection is made under the indifference-zone approach. To estimate the true P(CS), some Bayes estimators and a bootstrap estimator are proposed for two normal populations with common known variance. A bootstrap estimator and a bootstrap confidence interval are also proposed for normal populations with common unknown variance. Comparisons between the proposed estimators and some other known estimators are made via Monte Carlo simulations.
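A minimal sketch in the same spirit (a naive bootstrap of the rule "select the larger sample mean" for two normal samples; the data and sample sizes are illustrative, and this is not one of the Bayes or bootstrap estimators actually studied in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def bootstrap_pcs(x1, x2, B=5000):
    """Naive bootstrap estimate of P(CS) for the rule 'select the larger sample mean'.
    The population with the larger observed mean plays the role of the true best one."""
    best = 0 if x1.mean() >= x2.mean() else 1
    hits = 0
    for _ in range(B):
        m1 = rng.choice(x1, size=x1.size, replace=True).mean()
        m2 = rng.choice(x2, size=x2.size, replace=True).mean()
        hits += (0 if m1 >= m2 else 1) == best
    return hits / B

# illustrative data: true means 0.5 apart, common (known) variance 1
x1 = rng.normal(0.5, 1.0, size=25)
x2 = rng.normal(0.0, 1.0, size=25)
print("bootstrap estimate of P(CS):", bootstrap_pcs(x1, x2))
```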

5.
A sequential procedure for selecting the better of two trinomial populations was proposed by Šidák (1988). The present paper gives Monte Carlo results for four different strategies of sequential experimentation in this procedure, compares the strategies on this basis, and gives some practical recommendations for choosing a strategy.

6.
In this article, lower bounds for the expected sample size of sequential selection procedures are constructed for the problem of selecting the most probable event of a k-variate multinomial distribution. The study is based on Volodin's universal lower bounds for the expected sample size of statistical inference procedures. The obtained lower bounds are used to estimate the efficiency of some selection procedures in terms of their expected sample sizes.

7.
8.
Heckman's (1976, 1979) sample selection model has been employed in many studies of linear and nonlinear regression applications. It is well known that ignoring the sample selectivity may result in inconsistency of the estimator due to the correlation between the statistical errors in the selection and main equations. In this article, we reconsider the maximum likelihood estimator for the panel sample selection model in Keane et al. (1988). Since the panel data model contains individual effects, such as fixed or random effects, the likelihood function is more complicated than that of the classical Heckman model. As an alternative to the existing derivation of the likelihood function in the literature, we show that the conditional distribution of the main equation follows a closed skew-normal (CSN) distribution, a linear transformation of which is still CSN. Although the evaluation of the likelihood function involves high-dimensional integration, we show that the integration can be simplified into a one-dimensional problem and evaluated by the simulated likelihood method. Moreover, we conduct a Monte Carlo experiment to investigate the finite-sample performance of the proposed estimator and find that it provides reliable and quite satisfactory results.

9.
This paper is concerned with person parameter estimation in the binary Rasch model. The loss of efficiency of a pseudo, quasi, or composite likelihood approach is investigated. By means of a Monte Carlo study, two quasi-likelihood estimators are compared with two well-established maximum likelihood approaches, one of which is a weighted likelihood procedure. The results show that the observed values of the root mean squared error are practically equivalent for the compared estimators when the number of items is sufficiently large.
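For background, maximum likelihood estimation of the person parameter in the binary Rasch model (with item difficulties treated as known) reduces to a one-dimensional optimization. The sketch below is a plain MLE with illustrative responses and difficulties, not one of the pseudo-, quasi-, or weighted-likelihood estimators compared in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def rasch_person_mle(responses, difficulties):
    """MLE of the person parameter theta, given 0/1 responses and known item difficulties.
    (The MLE is searched on a bounded interval; all-0 or all-1 patterns end up at a bound.)"""
    x = np.asarray(responses, dtype=float)
    b = np.asarray(difficulties, dtype=float)

    def neg_loglik(theta):
        eta = theta - b
        # log P(x_i | theta) = x_i * eta_i - log(1 + exp(eta_i)) under the Rasch model
        return -(x * eta - np.logaddexp(0.0, eta)).sum()

    return minimize_scalar(neg_loglik, bounds=(-6.0, 6.0), method="bounded").x

print(rasch_person_mle([1, 1, 0, 1, 0], [-1.0, -0.5, 0.0, 0.5, 1.0]))
```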

10.
A sample size selection procedure for paired comparisons of means is presented which controls the half-width of the confidence intervals while allowing for unequal variances of treatment means.
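One standard way to carry out such a calculation, shown here only as a generic sketch (the planning value for the standard deviation of the paired differences and the target half-width are illustrative, and this is not necessarily the paper's exact procedure): find the smallest n whose t-based confidence interval half-width meets the target.

```python
from scipy import stats

def paired_sample_size(sd_diff, half_width, alpha=0.05, n_max=10_000):
    """Smallest n with t_{alpha/2, n-1} * sd_diff / sqrt(n) <= half_width."""
    for n in range(2, n_max + 1):
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)
        if t * sd_diff / n ** 0.5 <= half_width:
            return n
    raise ValueError("no n <= n_max achieves the requested half-width")

# e.g. paired differences with planning SD 4.0 and a desired 95% half-width of 1.5
print(paired_sample_size(sd_diff=4.0, half_width=1.5))
```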

11.
Hea-Jung Kim and Taeyoung Roh, Statistics, 2013, 47(5):1082-1111
In regression analysis, a sample selection scheme often applies to the response variable, which results in missing-not-at-random observations on the variable. In this case, a regression analysis using only the selected cases would lead to biased results. This paper proposes a Bayesian methodology to correct this bias based on a semiparametric Bernstein polynomial regression model that incorporates the sample selection scheme into a stochastic monotone trend constraint, variable selection, and robustness against departures from the normality assumption. We present the basic theoretical properties of the proposed model that include its stochastic representation, sample selection bias quantification, and hierarchical model specification to deal with the stochastic monotone trend constraint in the nonparametric component, simple bias-corrected estimation, and variable selection for the linear components. We then develop computationally feasible Markov chain Monte Carlo methods for semiparametric Bernstein polynomial functions with stochastically constrained parameter estimation and variable selection procedures. We demonstrate the finite-sample performance of the proposed model compared to existing methods using simulation studies and illustrate its use based on two real data applications.

12.
This paper presents a comprehensive review and comparison of five computational methods for Bayesian model selection, based on MCMC simulations from posterior model parameter distributions. We apply these methods to a well-known and important class of models in financial time series analysis, namely GARCH and GARCH-t models for conditional return distributions (assuming normal and t-distributions). We compare their performance with the more common maximum likelihood-based model selection for simulated and real market data. All five MCMC methods proved reliable in the simulation study, although differing in their computational demands. Results on simulated data also show that for large degrees of freedom (where the t-distribution becomes more similar to a normal one), Bayesian model selection results in better decisions in favor of the true model than maximum likelihood. Results on market data show the instability of the harmonic mean estimator and reliability of the advanced model selection methods.

13.
A procedure for selecting the Poisson population with the smallest mean is considered using an indifference-zone approach. The objective is to determine the smallest sample size n required from each of k ≥ 2 populations in order to attain the desired probability of correct selection. Since the procedure based on sample means is not consistent with respect to the difference or the ratio alone, two distance measures are used simultaneously to overcome the difficulty of obtaining a smallest probability of correct selection greater than some specified limit. The constants required to determine n are computed and tabulated. The asymptotic results are derived using a normal approximation. A comparison with the exact results indicates that the proposed approximation works well; only in extreme cases are small increases in n observed. An example of industrial accident data is used to illustrate this procedure.
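The sample-size search can be illustrated by brute force. This is a sketch under an assumed configuration of means; the rule "select the population with the smallest sample total", the P* value, and the means below are illustrative and do not reproduce the paper's two-distance formulation or tabulated constants.

```python
import numpy as np

rng = np.random.default_rng(3)

def pcs(n, means, reps=5000):
    """Monte Carlo P(CS) for 'select the Poisson population with the smallest sample total'."""
    means = np.asarray(means, dtype=float)
    best = int(np.argmin(means))
    totals = rng.poisson(np.outer(np.ones(reps), means) * n)       # reps x k matrix of sums
    # jitter in [0, 0.5) breaks ties among the integer totals uniformly at random
    picks = np.argmin(totals + rng.random(totals.shape) * 0.5, axis=1)
    return float(np.mean(picks == best))

def smallest_n(means, p_star=0.90, n_max=500):
    """Smallest common sample size n per population with estimated P(CS) >= p_star."""
    for n in range(1, n_max + 1):
        if pcs(n, means) >= p_star:
            return n
    raise ValueError("n_max reached without attaining p_star")

print(smallest_n([1.0, 1.5, 1.5]))     # assumed configuration, for illustration only
```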

14.
The method of Gupta (1956, 1965) was developed to select from k normal populations a subset that contains the best population with a given probability. This paper shows a duality between the general goal of selecting a subset containing the best population and many-one tests. A population should be regarded as a 'candidate' for the best population, and thus retained in the subset, if the samples from the other populations are not significantly better. Based on this idea, a general selection procedure is proposed using many-one tests for the comparison of each population against the remaining ones.
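As background, Gupta's classical rule for normal means with common known variance retains population i whenever xbar_i >= max_j xbar_j - d*sigma*sqrt(2/n). The sketch below calibrates d by simulation at equal means and applies the rule; it is an illustration of the subset-selection idea, not the many-one-test generalization proposed in the paper, and all numerical values are illustrative.

```python
import numpy as np

def gupta_subset(xbars, sigma, n, d):
    """Keep population i in the subset if xbar_i >= max_j xbar_j - d * sigma * sqrt(2/n)."""
    xbars = np.asarray(xbars, dtype=float)
    cutoff = xbars.max() - d * sigma * np.sqrt(2.0 / n)
    return np.flatnonzero(xbars >= cutoff)

def calibrate_d(k, p_star=0.95, reps=20000, seed=4):
    """Monte Carlo choice of d so that the best population is retained with probability p_star
    when all k means are equal; the calibrating quantity is pivotal, so n and sigma drop out."""
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(reps, k))                       # standardized sample means, equal case
    gaps = (z.max(axis=1) - z[:, 0]) / np.sqrt(2.0)      # population 0 plays the 'best' one
    return np.quantile(gaps, p_star)

d = calibrate_d(k=4)
print("calibrated d:", round(float(d), 3))
print("retained populations:", gupta_subset([2.1, 1.3, 2.0, 0.8], sigma=1.0, n=10, d=d))
```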

15.
Assessing the lifetime performance of products is an important topic in manufacturing and service industries. The lifetime performance index CL is used to measure larger-the-better quality characteristics in order to evaluate process performance for the improvement of quality and productivity. The lifetimes of products are assumed to follow a Burr XII distribution. The maximum likelihood estimator is used to estimate the lifetime performance index based on a progressive type I interval censored sample, and the asymptotic distribution of this estimator is derived. We use this estimator to build a new hypothesis testing algorithmic procedure with respect to a lower specification limit. Finally, two practical examples are given to illustrate the use of this testing algorithmic procedure to determine whether the process is capable.

16.
Given an inverse Gaussian distribution I(μ, a²μ) with known coefficient of variation a, the hypothesis H0: μ = μ0 is tested against H1: μ = μ1 using the sequential probability ratio test. The maximum of the expected sample number is shown to occur when μ is approximately equal to the geometric mean of μ0 and μ1, and this maximum value is shown to depend on μ0 and μ1 only through their ratio. It is observed that the test can be used to discriminate between two one-sided hypotheses.
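The test itself is easy to sketch. The sketch below assumes the parameterization IG(μ, λ) with λ = μ/a², which makes the coefficient of variation equal to a; the error rates, hypothesized means, and simulated data are illustrative, and this does not reproduce the paper's expected-sample-number analysis.

```python
import numpy as np
from scipy import stats

def ig_logpdf(x, mu, lam):
    """log density of IG(mu, lam): sqrt(lam / (2 pi x^3)) * exp(-lam (x - mu)^2 / (2 mu^2 x))."""
    return 0.5 * np.log(lam / (2 * np.pi * x ** 3)) - lam * (x - mu) ** 2 / (2 * mu ** 2 * x)

def sprt_ig(xs, mu0, mu1, a, alpha=0.05, beta=0.05):
    """Wald SPRT of H0: mu = mu0 vs H1: mu = mu1, taking lambda = mu / a**2 so that CV = a."""
    log_A, log_B = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += ig_logpdf(x, mu1, mu1 / a ** 2) - ig_logpdf(x, mu0, mu0 / a ** 2)
        if llr >= log_A:
            return "accept H1", n
        if llr <= log_B:
            return "accept H0", n
    return "no decision yet", len(xs)

# data simulated under H1; scipy's invgauss(m, scale=s) has mean m*s and shape parameter s
mu0, mu1, a = 1.0, 1.5, 0.5
lam1 = mu1 / a ** 2
data = stats.invgauss.rvs(mu1 / lam1, scale=lam1, size=200, random_state=0)
print(sprt_ig(data, mu0, mu1, a))
```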

17.
For survival endpoints in subgroup selection, a score conversion model is often used to convert the set of biomarkers for each patient into a univariate score, with the median of the univariate scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's set of candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design to demonstrate improvement of treatment efficacy (adaptive power), we suggest that subgroup selection be carried out only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while the overall adaptive power to detect treatment effects sacrifices approximately 4.5% for the simulation designs considered by performing the LRT; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
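A likelihood-based change-point search over candidate cutoffs can be sketched as follows (a generic two-group Gaussian profile-likelihood version with an illustrative minimum group fraction; it is not the authors' exact algorithm): for each candidate cutoff on the sorted scores, split the patients into two groups and keep the cutoff that maximizes the total log-likelihood.

```python
import numpy as np

def changepoint_cutoff(scores, min_frac=0.10):
    """Cutoff on the sorted scores maximizing a two-group Gaussian profile log-likelihood."""
    s = np.sort(np.asarray(scores, dtype=float))
    n = s.size
    lo, hi = int(n * min_frac), int(n * (1 - min_frac))   # keep both groups non-trivial
    best_cut, best_ll = None, -np.inf
    for i in range(lo, hi):
        ll = 0.0
        for grp in (s[:i], s[i:]):
            var = grp.var() + 1e-12                       # MLE variance, guarded against zero
            ll += -0.5 * grp.size * (np.log(2 * np.pi * var) + 1.0)
        if ll > best_ll:
            best_cut, best_ll = (s[i - 1] + s[i]) / 2.0, ll
    return best_cut

rng = np.random.default_rng(5)
scores = np.concatenate([rng.normal(0, 1, 150), rng.normal(2, 1, 50)])   # unequal subgroup sizes
print("estimated cutoff:", round(changepoint_cutoff(scores), 3))
```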

18.
19.
This paper shows that a minimax Bayes rule and shrinkage estimators can be effectively applied to portfolio selection under the Bayesian approach. Specifically, it is shown that the portfolio selection problem can be formulated as a statistical decision problem in some situations. Following that, we present a method for solving a problem involved in portfolio selection under the Bayesian approach.
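To make the connection to shrinkage concrete, here is a generic illustration: a James-Stein-style shrink of the sample mean vector toward its grand mean, followed by unconstrained mean-variance weights. The data, shrinkage intensity, and weighting rule are illustrative assumptions; this is not the minimax Bayes rule derived in the paper.

```python
import numpy as np

def shrinkage_weights(returns):
    """Unconstrained mean-variance weights using a James-Stein-type shrink of the mean vector.
    `returns` is a T x k matrix of asset returns; everything here is illustrative."""
    T, k = returns.shape
    mu = returns.mean(axis=0)
    cov = np.cov(returns, rowvar=False)
    grand = mu.mean()
    avg_var = np.trace(cov) / (k * T)                      # average variance of a sample mean
    intensity = min(1.0, max(0.0, (k - 3) * avg_var / ((mu - grand) ** 2).sum()))
    mu_shrunk = intensity * grand + (1.0 - intensity) * mu
    w = np.linalg.solve(cov, mu_shrunk)                    # tangency-style direction
    return w / w.sum()                                     # normalize to sum to one

rng = np.random.default_rng(6)
rets = rng.multivariate_normal([0.010, 0.012, 0.008, 0.011],
                               0.0004 * np.eye(4) + 0.0001, size=120)
print(np.round(shrinkage_weights(rets), 3))
```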

20.
Disease prevalence can be estimated by classifying subjects according to whether they have the disease. When gold-standard tests are too expensive to be applied to all subjects, partially validated data can be obtained by double sampling, in which all individuals are classified by a fallible classifier and some of the individuals are validated by the gold-standard classifier. In practice, however, such an infallible classifier may not be available. In this article, we consider two models in which both classifiers are fallible and propose four asymptotic test procedures for comparing the disease prevalence in two groups. Corresponding sample size formulae and validated ratios given the total sample sizes are also derived and evaluated. Simulation results show that (i) the score test performs well, and the corresponding sample size formula is accurate in terms of empirical power and size in both models; (ii) the Wald test based on the variance estimator with parameters estimated under the null hypothesis outperforms the others even for small sample sizes in Model II, and the sample size estimated by this test is also accurate; (iii) the estimated validated ratios based on all tests are accurate. Malaria data are used to illustrate the proposed methodologies.
