Similar Articles
20 similar articles found (search time: 593 ms).
1.
The problem is to classify an individual into one of two populations, based on an observation on the individual that follows a stationary Gaussian process; the two populations correspond to two distinct time points. Plug-in likelihood ratio rules are considered using samples from the process. The distributions of the associated classification statistics are derived. For the special case in which the misclassification probabilities are equal, the effect of the dependence between the population distributions on the probability of correct classification is studied. Lower bounds and an iterative method for evaluating the optimal correlation between the populations are obtained.

2.
This paper presents results concerning the implementation of two estimators for the total of a finite population, each of which is optimal under either an additive or a purely interaction model. The assumptions under which the estimators are derived, some mathematical properties of the estimators, and tables that compare the estimators and give optimal allocation rules as a function of the relevant parameters are given.

3.
In this paper, we suggest procedures for classifying an observation into one of two exponential populations, assuming a known ordering between the population parameters. We propose classification rules for the cases where either the location or the scale parameters are ordered. Some of these order-restricted classification rules are better than the usual classification rules with respect to the expected probability of correct classification. We also derive likelihood ratio-based classification rules. These classification rules are compared using Monte Carlo simulations.
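As a sketch of the order-restricted idea (not the paper's exact rules; the pooling estimator and the tie-breaking convention here are illustrative assumptions), a plug-in likelihood-ratio rule for two exponential populations with scale parameters restricted by θ1 ≤ θ2 might pool the sample means whenever they violate the assumed order:

```python
import math

def classify_ordered_exp(z, x1, x2):
    """Plug-in likelihood-ratio rule for two exponential populations
    with scale parameters restricted by theta1 <= theta2.  Sample
    means that violate the order are pooled (isotonic adjustment)."""
    t1 = sum(x1) / len(x1)          # MLE of theta1
    t2 = sum(x2) / len(x2)          # MLE of theta2
    if t1 > t2:                     # enforce theta1 <= theta2 by pooling
        t1 = t2 = (sum(x1) + sum(x2)) / (len(x1) + len(x2))
    # exponential log-density at z: -log(theta) - z/theta
    ll1 = -math.log(t1) - z / t1
    ll2 = -math.log(t2) - z / t2
    return 1 if ll1 >= ll2 else 2
```

When the order is respected, the rule reduces to the usual plug-in likelihood comparison; when it is violated, both densities coincide after pooling and ties go to population 1.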

4.
The problem of selecting the best population from among a finite number of populations in the presence of uncertainty is one faced in many scientific investigations, and has been studied extensively. Many selection procedures have been derived for different selection goals. However, most of these selection procedures, being frequentist in nature, do not tell how to incorporate the information in a particular sample to give a data-dependent measure of correct selection achieved for that particular sample. They often assign the same decision and probability of correct selection to two different sample values, one of which actually seems intuitively much more conclusive than the other. The methodology of conditional inference offers an approach which achieves both frequentist interpretability and a data-dependent measure of conclusiveness. By partitioning the sample space into a family of subsets, the achieved probability of correct selection is computed by conditioning on which subset the sample falls in. In this paper, the partition considered is the so-called continuum partition, while the selection rules are both fixed-size and random-size subset selection rules. Under the distributional assumption of a monotone likelihood ratio, results on the least favourable configuration and alpha-correct selection are established. These results are not only useful in themselves, but also are used to design a new sequential procedure with elimination for selecting the best of k binomial populations. Comparisons between this new procedure and some other sequential selection procedures with regard to total expected sample size and some risk functions are carried out by simulations.

5.
The problem considered here is to classify a unit into one of two populations based on a vector of measurements on the unit. The observation vector is assumed to follow an auto-regressive process. Samples from the process are used to construct classification rules. The distributions of some classification statistics are obtained. The admissibility of some classification rules is established.

6.
The problem of selecting good populations out of k normal populations is considered in a Bayesian framework under exchangeable normal priors and additive loss functions. Some basic approximations to the Bayes rules are discussed. These approximations suggest that some well-known classical rules are "approximate" Bayes rules. In particular, it is shown that Gupta-type rules are extended Bayes with respect to a family of exchangeable normal priors for any bounded and additive loss function. Furthermore, for a simple loss function, the results of a Monte Carlo comparison of Gupta-type rules and Seal-type rules are presented. They indicate that, in general, Gupta-type rules perform better than Seal-type rules.
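A Gupta-type rule is easy to state concretely: retain every population whose sample mean falls within a constant d of the largest observed sample mean. The sketch below assumes this standard form; the choice of d, which controls the probability of correct selection, is not addressed here.

```python
import numpy as np

def gupta_subset(means, d):
    """Gupta-type subset selection: keep each population whose sample
    mean is within d of the largest observed sample mean."""
    means = np.asarray(means, dtype=float)
    return np.flatnonzero(means >= means.max() - d)
```

For example, `gupta_subset([1.0, 2.9, 3.0, 2.0], 0.5)` retains populations 1 and 2 (0-indexed): both lie within 0.5 of the maximum mean 3.0.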

7.
In many applied classification problems, the populations of interest are defined in terms of ranges for the dependent variable. In these situations, it is intuitively appealing to classify individuals into the respective populations based on their estimated conditional expectation. On the other hand, based on theoretical considerations, one may wish to use the classification rule based on the posterior probabilities. This article shows that under certain conditions these two classification rules are equivalent.

8.
The two populations considered in this study are two distinct time points. Samples consist of observations made at both time points on every sampling unit. The unit to be classified is observed at one of the two time points. The observation vectors contain covariates having the same expectation at both time points. In this set-up, the admissibility of some likelihood ratio rules is established.

9.
Two-sample comparisons belong to a basic class of statistical inference problems and are extensively applied in practice. There is a rich statistical literature regarding different parametric methods to address these problems. In this context, most of the powerful techniques assume normally distributed populations. In practice, the distributions of the compared samples are commonly unknown. In this case, one can propose a combined test based on the following decision rules: (a) the likelihood-ratio test (LRT) for equality of two normal populations and (b) the Shapiro-Wilk (S-W) test for normality. Rules (a) and (b) can be merged by, e.g., using the Bonferroni correction technique to offer a correct comparison of the samples' distributions. Alternatively, we propose the exact density-based empirical likelihood (DBEL) ratio test. We develop the tsc package as the first R package available to perform two-sample comparisons using the exact test procedures: the LRT; the LRT combined with the S-W test; as well as the newly developed DBEL ratio test. We present Monte Carlo (MC) results and a real data example to demonstrate the efficiency and applicability of the developed procedures.
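The combined rule (a)+(b) can be sketched as follows. This is a Python approximation using an asymptotic chi-squared LRT with a Bonferroni split of the level, not the exact procedures implemented in the authors' tsc R package:

```python
import numpy as np
from scipy import stats

def lrt_equal_normals(x, y):
    """Asymptotic likelihood-ratio test that two samples come from the
    same normal distribution (common mean and variance).  The statistic
    -2 log Lambda is referred to a chi-squared distribution with 2 df."""
    n1, n2 = len(x), len(y)
    v1 = np.var(x)                      # MLE variances (ddof = 0)
    v2 = np.var(y)
    v0 = np.var(np.concatenate([x, y]))  # pooled MLE under H0
    stat = n1 * np.log(v0 / v1) + n2 * np.log(v0 / v2)
    return stat, stats.chi2.sf(stat, df=2)

def combined_test(x, y, alpha=0.05):
    """Bonferroni combination: reject equality of the two populations
    if either the S-W normality check (worst of the two samples) or
    the LRT rejects at level alpha/2."""
    p_sw = min(stats.shapiro(x).pvalue, stats.shapiro(y).pvalue)
    _, p_lrt = lrt_equal_normals(x, y)
    return p_sw < alpha / 2 or p_lrt < alpha / 2
```

The Bonferroni split (alpha/2 per component) keeps the overall size of the combined decision rule at most alpha.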

10.
Two nonparametric classification rules for univariate populations are proposed: one in which the probability of correct classification is a specified number, and another for which the probability of correct classification must be evaluated. In each case the classification is based on the Chernoff and Savage (1958) class of statistics, with possible specialization to populations differing by location shifts or by changes of scale. An optimality property, namely consistency of the classification procedure, is established for the second rule when the distributions are either fixed or "near" in the Pitman sense and tend to a common distribution at a specified rate. A measure of asymptotic efficiency is defined for the second rule, and its asymptotic efficiency based on the Chernoff-Savage class of statistics relative to the parametric competitors in the case of location shifts and scale changes is shown to equal the analogous Pitman efficiency.

11.
Sequential methods for choosing the better of two Bernoulli populations are discussed using a Bayesian framework and when the maximum number of observations is fixed. Performance characteristics of the designs are obtained by using Monte Carlo simulation. Several sampling rules are considered, together with a stopping rule due to Bechhofer and Kulkarni (1982) and some modifications which use posterior estimates of the unknown probabilities.
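One concrete stopping idea in this spirit — a simplified curtailment sketch, not the Bechhofer-Kulkarni rule itself — is to sample the two arms in pairs and stop as soon as the lead in successes exceeds the number of pairs remaining, since the final decision can then no longer change:

```python
import random

def curtailed_pairwise(p1, p2, max_pairs, rng):
    """Sample both Bernoulli arms in pairs, stopping early once the
    lead in successes exceeds the pairs remaining (the decision is
    then fixed).  Returns (selected arm, pairs actually used)."""
    s1 = s2 = 0
    for t in range(1, max_pairs + 1):
        s1 += rng.random() < p1
        s2 += rng.random() < p2
        if abs(s1 - s2) > max_pairs - t:
            break                     # outcome can no longer change
    if s1 == s2:
        return rng.choice([1, 2]), t  # tie: pick an arm at random
    return (1 if s1 > s2 else 2), t
```

Curtailment leaves the terminal decision identical to fixed-sample pairwise sampling while reducing the expected number of observations.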

12.
Consider a longitudinal experiment where subjects are allocated to one of two treatment arms and are subjected to repeated measurements over time. Two non-parametric group sequential procedures, based on the Wilcoxon rank sum test and fitted with asymptotically efficient allocation rules, are derived to test the equality of the rates of change over time of the two treatments, when the distribution of responses is unknown. The procedures are designed to allow for early stopping to reject the null hypothesis while allocating fewer subjects to the inferior treatment. Simulations based on the normal, the logistic and the exponential distributions showed that the proposed allocation rules substantially reduce allocations to the inferior treatment, but at the expense of a relatively small increase in the total sample size and a moderate decrease in power as compared to the pairwise allocation rule.

13.
A compound decision problem is investigated in which the component decision problem is the classification of a random sample as having come from one of a finite number of univariate populations. The Bayesian approach is discussed. A distribution-free decision rule is presented which has asymptotic risk equal to zero. The asymptotic efficiencies of these rules are discussed.

The results of a computer simulation are presented which compare the Bayes rule to the distribution-free rule under the assumption of normality. It is found that the distribution-free rule can be recommended in situations where certain key location parameters are not known precisely and/or when certain distributional assumptions are not satisfied.

14.
The normal linear discriminant rule (NLDR) and the normal quadratic discriminant rule (NQDR) are popular classifiers when working with normal populations. Several papers in the literature have been devoted to a comparison of these rules with respect to classification performance. An aspect which has, however, not received any attention is the effect of an initial variable selection step on the relative performance of these classification rules. Cross-model-validation variable selection has been found to perform well in the linear case, and can be extended to the quadratic case. We report the results of a simulation study comparing the NLDR and the NQDR with respect to post-variable-selection classification performance. It is of interest that the NQDR generally benefits from an initial variable selection step. We also comment briefly on the problem of estimating the post-selection error rates of the two rules.

15.
The problem of classification into two univariate normal populations with a common mean is considered. Several classification rules are proposed based on efficient estimators of the common mean. Detailed numerical comparisons of the probabilities of misclassification using these rules have been carried out. It is shown that the classification rule based on the Graybill-Deal estimator of the common mean performs best. Classification rules are also proposed for the case when the variances are assumed to be ordered. These rules are compared with the rule based on the Graybill-Deal estimator with respect to individual probabilities of misclassification.
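For concreteness, a minimal plug-in version of such a rule (the exact rules compared in the paper may differ) uses the Graybill-Deal variance-weighted estimate of the common mean and assigns a new observation to the population with the larger estimated normal density:

```python
import numpy as np

def graybill_deal(x1, x2):
    """Graybill-Deal common-mean estimate: the precision-weighted
    average of the two sample means, with weights n_i / s_i^2."""
    n1, n2 = len(x1), len(x2)
    s1, s2 = x1.var(ddof=1), x2.var(ddof=1)
    w1, w2 = n1 / s1, n2 / s2
    return (w1 * x1.mean() + w2 * x2.mean()) / (w1 + w2)

def classify(z, x1, x2):
    """Plug-in likelihood rule: assign z to the population (1 or 2)
    whose estimated N(mu_hat, s_i^2) log-density is larger."""
    mu = graybill_deal(x1, x2)
    s1, s2 = x1.var(ddof=1), x2.var(ddof=1)
    ll1 = -0.5 * np.log(s1) - (z - mu) ** 2 / (2 * s1)
    ll2 = -0.5 * np.log(s2) - (z - mu) ** 2 / (2 * s2)
    return 1 if ll1 > ll2 else 2
```

With a common mean, the rule has an intuitive form: observations near the common mean go to the low-variance population, observations far in the tails go to the high-variance one.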

16.
This paper deals with the problem of selecting the "best" population from a given number of populations in a decision theoretic framework. The class of selection rules considered is based on a suitable partition of the sample space. A selection rule is given which is shown to have certain optimum properties among the selection rules in the given class.

17.
The problem of selecting exponential populations better than a control under a simple ordering prior is investigated. Based on prior information, it is appropriate to set lower bounds for the parameters concerned. This information about the lower bounds is taken into account to derive isotonic selection rules when the control is known. An isotonic selection rule for the case of an unknown control is also proposed. A criterion is proposed to evaluate the performance of the selection rules, and simulation comparisons among several selection rules are carried out. The simulation results indicate that, for the known-control case, the newly proposed selection rules perform better than some earlier selection rules.

18.
An up-and-down (UD) experiment for estimating a given quantile of a binary response curve is a sequential procedure whereby at each step a given treatment level is used and, according to the outcome of the observations, a decision is made (deterministic or randomized) whether to maintain the same treatment level, increase it by one level, or decrease it by one level. The design points of such UD rules generate a Markov chain, and the mode of its invariant distribution is an approximation to the quantile of interest. The main area of application of UD algorithms is in Phase I clinical trials, where it is of greatest importance to be able to attain reliable results in small-size experiments. In this paper we address the issues of the speed of convergence and the precision of quantile estimates of such procedures, both in theory and by simulation. We prove that the version of UD designs introduced in 1994 by Durham and Flournoy can in a large number of cases be regarded as optimal among all UD rules. Furthermore, in order to improve on the convergence properties of this algorithm, we propose a second-order UD experiment which, instead of making use of just the most recent observation, bases the next step on the outcomes of the last two. This procedure shares a number of desirable properties with the corresponding first-order designs, and also allows greater flexibility. With a suitable choice of the parameters, the new scheme is at least as good as the first-order one and leads to an improvement of the quantile estimates when the starting point of the algorithm is low relative to the target quantile.
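The first-order biased-coin UD rule of Durham and Flournoy can be sketched as follows for a target quantile Γ < 1/2: step down after a toxic response, and otherwise step up with probability Γ/(1−Γ), so that the chain balances where p(x) = Γ. The toxicity curve and level grid used below are illustrative assumptions:

```python
import numpy as np

def ud_biased_coin(p, target, n_steps, start, rng):
    """Durham-Flournoy biased-coin up-and-down walk over levels
    0..len(p)-1, targeting the `target` quantile (target < 0.5) of the
    toxicity curve p.  Returns the visit count of each level; the
    modal level approximates the target quantile."""
    bias = target / (1.0 - target)    # P(step up | non-toxic outcome)
    k = start
    visits = np.zeros(len(p), dtype=int)
    for _ in range(n_steps):
        visits[k] += 1
        if rng.random() < p[k]:       # toxic: always step down
            k = max(k - 1, 0)
        elif rng.random() < bias:     # non-toxic: biased coin decides up
            k = min(k + 1, len(p) - 1)
    return visits
```

At the balance point, the up-rate (1−p(x))·Γ/(1−Γ) equals the down-rate p(x) exactly when p(x) = Γ, which is why the chain's invariant distribution peaks near the Γ-quantile.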

19.
Models in which the number of goals scored by a team in a soccer match follow a Poisson distribution, or a closely related one, have been widely discussed. We here consider a soccer match as an experiment to assess which of two teams is superior and examine the probability that the outcome of the experiment (match) truly represents the relative abilities of the two teams. Given a final score, it is possible by using a Bayesian approach to quantify the probability that it was or was not the case that 'the best team won'. For typical scores, the probability of a misleading result is significant. Modifying the rules of the game to increase the typical number of goals scored would improve the situation, but a level of confidence that would normally be regarded as satisfactory could not be obtained unless the character of the game was radically changed.
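The flavour of the calculation can be reproduced with a small Monte Carlo sketch: give each team's Poisson scoring rate an independent Gamma prior (the prior parameters below are illustrative assumptions, not the paper's), update on the final score, and ask how often the winner's posterior rate actually exceeds the loser's:

```python
import numpy as np

def p_better_team_won(goals_w, goals_l, n_draws=100_000,
                      prior_shape=2.0, prior_rate=1.5, seed=0):
    """Posterior probability that the winning team's Poisson scoring
    rate really exceeds the loser's, under independent Gamma priors.
    Gamma(a, b) prior + Poisson(count) from one match gives a
    Gamma(a + count, b + 1) posterior (conjugacy)."""
    rng = np.random.default_rng(seed)
    scale = 1.0 / (prior_rate + 1.0)          # posterior rate = b + 1
    lam_w = rng.gamma(prior_shape + goals_w, scale, n_draws)
    lam_l = rng.gamma(prior_shape + goals_l, scale, n_draws)
    return float(np.mean(lam_w > lam_l))
```

Under these assumed priors, a typical 2-1 score leaves the probability that the better team won well short of certainty, consistent with the paper's point that misleading results are common at soccer's low scoring rates.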

20.
We study the problem of classification with multiple q-variate observations with and without time effect on each individual. We develop new classification rules for populations with certain structured and unstructured mean vectors and under certain covariance structures. The new classification rules are effective when the number of observations is not large enough to estimate the variance-covariance matrix. Computational schemes for maximum likelihood estimates of required population parameters are given. We apply our findings to two real data sets as well as to a simulated data set.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)