Similar articles
 20 similar articles found (search time: 15 ms)
1.
In reliability and lifetime testing, comparison of two groups of data is a common problem. It is often attractive, or even necessary, to make a quick and efficient decision in order to save time and costs. This paper presents a nonparametric predictive inference (NPI) approach to comparing two groups, say X and Y, when one or both are progressively censored. NPI can easily be applied to different types of progressive censoring schemes. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. These inferences consider the event that the lifetime of a future unit from Y is greater than the lifetime of a future unit from X.
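For the simple special case of two complete (uncensored) samples, the NPI lower and upper probabilities for the event that a future Y exceeds a future X can be sketched as follows. This is an illustrative reduction based on Hill's assumption A(n), not the paper's progressive-censoring procedure, and `npi_compare` is a hypothetical helper name:

```python
def npi_compare(x, y):
    """NPI lower/upper probability that a future observation from Y
    exceeds a future observation from X, for complete data.
    A(n): the next observation falls in each of the n+1 open intervals
    between consecutive ordered data values with probability 1/(n+1)."""
    INF = float("inf")
    xs, ys = sorted(x), sorted(y)
    x_iv = list(zip([-INF] + xs, xs + [INF]))  # intervals for next X
    y_iv = list(zip([-INF] + ys, ys + [INF]))  # intervals for next Y
    w = 1.0 / (len(x_iv) * len(y_iv))
    lower = upper = 0.0
    for xl, xu in x_iv:
        for yl, yu in y_iv:
            if yl >= xu:   # Y's interval lies entirely above X's: Y > X certain
                lower += w
            if yu > xl:    # Y's interval can reach above X's: Y > X possible
                upper += w
    return lower, upper

lo, up = npi_compare([1, 3, 5], [2, 4, 6])
```

The gap between `lo` and `up` reflects the imprecision that remains with only three observations per group.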

2.
3.
Summary.  Efron's biased coin design is a well-known randomization technique that helps to neutralize selection bias in sequential clinical trials for comparing treatments, while keeping the experiment fairly balanced. Extensions of the biased coin design have been proposed by several researchers, who have focused mainly on the large-sample properties of their designs. We modify Efron's procedure by introducing an adjustable biased coin design, which is more flexible. We compare it with other existing coin designs; in terms of balance and lack of predictability, its small-sample performance appears in many cases to improve on the other sequential randomized allocation procedures.
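Efron's original rule is easy to simulate; the sketch below uses bias p = 2/3 (Efron's suggested value) and is a minimal illustration of the baseline procedure, not the adjustable design proposed in the paper:

```python
import random

def biased_coin_trial(n, p=2 / 3, seed=0):
    """Sequentially allocate n patients to arms A/B with Efron's
    biased coin: toss a fair coin when the trial is balanced,
    otherwise favour the under-represented arm with probability p."""
    rng = random.Random(seed)
    a = b = 0
    alloc = []
    for _ in range(n):
        if a == b:
            pick_a = rng.random() < 0.5   # balanced: fair coin
        elif a < b:
            pick_a = rng.random() < p     # A behind: favour A
        else:
            pick_a = rng.random() < 1 - p # B behind: favour B
        alloc.append("A" if pick_a else "B")
        if pick_a:
            a += 1
        else:
            b += 1
    return alloc, abs(a - b)

alloc, imb = biased_coin_trial(100)
```

The final imbalance `imb` stays small because every step pushes back towards balance.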

4.
Some lower and upper bounds on multivariate Gaussian probabilities are given based on the univariate Mills' ratio. These bounds are sharper than known bounds based on the multivariate Mills' ratio in many cases.
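The paper's multivariate bounds are not reproduced here, but the univariate Mills' ratio R(x) = (1 − Φ(x))/φ(x) on which they build, together with its classical bounds x/(x² + 1) < R(x) < 1/x for x > 0, can be checked numerically:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def tail(x):
    """Upper tail probability 1 - Phi(x), via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def mills(x):
    """Univariate Mills' ratio R(x) = (1 - Phi(x)) / phi(x)."""
    return tail(x) / phi(x)

# Classical bounds: x/(x^2 + 1) < R(x) < 1/x for x > 0.
checks = [(x, x / (x * x + 1) < mills(x) < 1 / x) for x in (0.5, 1, 2, 5)]
```

Both bounds tighten as x grows; at x = 5 the ratio is already squeezed between 5/26 ≈ 0.1923 and 0.2.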

5.
This article presents non-parametric predictive inference for future order statistics. Given data consisting of n real-valued observations, m future observations are considered, and predictive probabilities are presented for the r-th ordered future observation. In addition, joint and conditional probabilities for events involving multiple future order statistics are presented. The article further presents the use of such predictive probabilities for order statistics in statistical inference, in particular for pairwise and multiple comparisons based on two or more independent groups of data.
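A small sketch of the underlying counting argument: with n data values and m future observations, every interleaving of the n + m values is treated as equally likely, so the predictive probability that the r-th ordered future observation falls between consecutive data values follows by enumeration. `pred_prob_order_stat` is a hypothetical name, illustrating the flavour of such inferences rather than the paper's exact results:

```python
from itertools import combinations
from math import comb

def pred_prob_order_stat(n, m, r):
    """P(r-th smallest of m future observations falls in interval I_{d+1}),
    where I_{d+1} lies between the d-th and (d+1)-th ordered data values
    (I_1 and I_{n+1} are the tails). All C(n+m, m) interleavings of data
    and future values are equally likely."""
    total = comb(n + m, m)
    probs = [0.0] * (n + 1)
    for ranks in combinations(range(n + m), m):  # ranks of the future values
        p = ranks[r - 1]   # overall rank of the r-th future value
        d = p - (r - 1)    # number of data values lying below it
        probs[d] += 1 / total
    return probs

probs = pred_prob_order_stat(n=4, m=3, r=2)
```

For the median of m = 3 future observations the distribution over the n + 1 = 5 intervals is symmetric, as expected.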

6.
7.
In this note, we derive upper bounds on the variance of a mixed random variable. Our results extend previous results for unimodal and symmetric random variables. The novelty of our findings is that this mixed random variable need not be symmetric and may be multimodal. We also characterize the cases in which these bounds are optimal.
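The paper's bounds themselves are not reproduced here; as a simple baseline that already covers multimodal variables, Popoviciu's inequality Var(X) ≤ (b − a)²/4 for any random variable supported on [a, b] can be checked by simulation on a bimodal mixture:

```python
import random
import statistics

rng = random.Random(1)
# Bimodal (mixed) sample on [0, 1]: mixture of two narrow uniforms.
sample = [rng.uniform(0.0, 0.2) if rng.random() < 0.5 else rng.uniform(0.8, 1.0)
          for _ in range(10_000)]
var = statistics.pvariance(sample)
popoviciu = (1.0 - 0.0) ** 2 / 4  # (b - a)^2 / 4 upper bound
```

The mixture's variance is roughly 0.163, comfortably below the bound of 0.25; equality would require all mass at the two endpoints.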

8.
Nonparametric predictive inference (NPI) is a powerful frequentist statistical framework based only on an exchangeability assumption for future and past observations, made possible by the use of lower and upper probabilities. In this article, NPI is presented for ordinal data, which are categorical data with an ordering of the categories. The method uses a latent variable representation of the observations and categories on the real line. Lower and upper probabilities for events involving the next observation are presented, and briefly compared to NPI for non-ordered categorical data. As an application, the comparison of multiple groups of ordinal data is presented.

9.
In finance, inferences about future asset returns are typically quantified with the use of parametric distributions and single-valued probabilities. It is attractive to use less restrictive inferential methods, including nonparametric methods which do not require distributional assumptions about variables, and imprecise probability methods which generalize the classical concept of probability to set-valued quantities. Main attractions include the flexibility of the inferences to adapt to the available data and that the level of imprecision in inferences can reflect the amount of data on which these are based. This paper introduces nonparametric predictive inference (NPI) for stock returns. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. NPI is presented for inference about future stock returns, as a measure for risk and uncertainty, and for pairwise comparison of two stocks based on their future aggregate returns. The proposed NPI methods are illustrated using historical stock market data.

10.
Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine, machine learning, and credit scoring. The receiver operating characteristic (ROC) surface is a useful tool to assess the ability of a diagnostic test to discriminate among three ordered classes or groups. In this article, nonparametric predictive inference (NPI) for three-group ROC analysis for ordinal outcomes is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modeling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. This article also includes results on the volumes under the ROC surfaces and consideration of the choice of decision thresholds for the diagnosis. Two examples are provided to illustrate our method.
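A common summary of the ROC surface is the volume under it (VUS), which for empirical data equals the proportion of correctly ordered triples, one score drawn from each group. The following is a minimal empirical sketch, not the NPI lower/upper version developed in the paper:

```python
def empirical_vus(g1, g2, g3):
    """Empirical volume under the ROC surface: the proportion of
    triples (one score per group) correctly ordered g1 < g2 < g3.
    A VUS of 1/6 corresponds to a test with no discriminatory power."""
    ok = sum(1 for a in g1 for b in g2 for c in g3 if a < b < c)
    return ok / (len(g1) * len(g2) * len(g3))

vus = empirical_vus([1, 2], [3, 4], [5, 6])
```

Here the three groups are perfectly separated, so every triple is correctly ordered.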

11.
12.
In this paper we propose and study two sequential elimination procedures for selecting all new treatments better than a standard or control treatment. These procedures differ from those previously proposed in that we assume variances are unequal and unknown. Expressions for asymptotic expected sample sizes are given. Confidence intervals associated with the procedures are also discussed.

13.
This paper offers a predictive approach for selecting a fixed number t of treatments from k treatments, with the goal of controlling predictive losses. For the ith treatment, independent observations X_ij (j = 1, 2, …, n) can be observed, where the X_ij are normally distributed N(θ_i, σ²). The ranked values of the θ_i and of the statistics X_i are θ_(1) ≤ … ≤ θ_(k) and X_[1] ≤ … ≤ X_[k], and the selected subset S = {[k], [k−1], …, [k−t+1]} is considered. This paper distinguishes between two types of loss function. A type I loss function associated with a selected subset S is the loss in utility from the selector's viewpoint and is a function of the θ_i with i ∈ S. A type II loss function associated with S measures the unfairness of the selection from the candidates' viewpoint and is a function of the θ_i with i ∉ S. This paper shows that under mild assumptions on the loss functions S is optimal, and it provides the formulae needed to choose n so that the two types of loss can be controlled individually or simultaneously with high probability. Predictive bounds for the losses are provided. Numerical examples support the usefulness of the predictive approach over the design-of-experiment approach.
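The selection rule itself, taking the t treatments with the largest sample statistics, is straightforward to state in code. The sketch below is illustrative only (hypothetical names, simulated data) and does not implement the paper's loss-controlling choice of n:

```python
import random
import statistics

def select_top(data, t):
    """Select the t treatments with the largest sample means,
    i.e. the subset S = {[k], [k-1], ..., [k-t+1]}."""
    means = {i: statistics.mean(obs) for i, obs in data.items()}
    return set(sorted(means, key=means.get, reverse=True)[:t])

rng = random.Random(0)
# k = 4 hypothetical treatments, n = 50 observations each,
# true means 0, 1, 2, 3 and common standard deviation 1.
data = {i: [rng.gauss(i, 1.0) for _ in range(50)] for i in range(4)}
S = select_top(data, t=2)
```

With n = 50 the standard error of each sample mean is about 0.14, so the two best treatments are selected with very high probability.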

14.
The asymptotic power efficiency of the class of linear rank tests relative to the asymptotically most powerful rank test is derived for a two-sample location and scale problem, and numerical evaluations are presented for two special tests.

15.
Measuring the accuracy of diagnostic tests is crucial in many application areas including medicine and health care. Good methods for determining diagnostic accuracy provide useful guidance on selection of patient treatment, and the ability to compare different diagnostic tests has a direct impact on quality of care. In this paper Nonparametric Predictive Inference (NPI) methods for accuracy of diagnostic tests with continuous test results are presented and discussed. For such tests, Receiver Operating Characteristic (ROC) curves have become popular tools for describing the performance of diagnostic tests. We present the NPI approach to ROC curves, and some important summaries of these curves. As NPI does not aim at inference for an entire population but instead explicitly considers a future observation, this provides an attractive alternative to standard methods. We show how NPI can be used to compare two continuous diagnostic tests.
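For a continuous diagnostic test, the area under the empirical ROC curve equals the Mann–Whitney statistic: the proportion of (healthy, diseased) pairs ranked correctly. This standard empirical version, not the NPI variant of the paper, can be sketched as:

```python
def empirical_auc(healthy, diseased):
    """Empirical AUC = P(diseased score > healthy score),
    the Mann-Whitney statistic; ties count one half."""
    n = len(healthy) * len(diseased)
    wins = sum(1.0 if d > h else 0.5 if d == h else 0.0
               for h in healthy for d in diseased)
    return wins / n

auc = empirical_auc([0.10, 0.40, 0.35], [0.80, 0.90, 0.70])
```

An AUC of 0.5 corresponds to a test with no discriminatory power, and 1.0 to perfect separation.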

16.
This paper uses the decomposition framework from the economics literature to examine the statistical structure of treatment effects estimated with observational data compared to those estimated from randomized studies. It begins with the estimation of treatment effects using a dummy variable in regression models and then presents the decomposition method from economics which estimates separate regression models for the comparison groups and recovers the treatment effect using bootstrapping methods. This method shows that the overall treatment effect is a weighted average of structural relationships of patient features with outcomes within each treatment arm and differences in the distributions of these features across the arms. In large randomized trials, it is assumed that the distribution of features across arms is very similar. Importantly, randomization not only balances observed features but also unobserved. Applying high dimensional balancing methods such as propensity score matching to the observational data causes the distributional terms of the decomposition model to be eliminated but unobserved features may still not be balanced in the observational data. Finally, a correction for non-random selection into the treatment groups is introduced via a switching regime model. Theoretically, the treatment effect estimates obtained from this model should be the same as those from a randomized trial. However, there are significant challenges in identifying instrumental variables that are necessary for estimating such models. At a minimum, decomposition models are useful tools for understanding the relationship between treatment effects estimated from observational versus randomized data.
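The core decomposition idea, running separate regressions per arm and splitting the mean outcome gap into a part explained by covariate distributions and a structural part, can be sketched for a single covariate (a two-fold Oaxaca–Blinder decomposition). Function names are illustrative, and the paper's bootstrap and switching-regime steps are omitted:

```python
def ols(x, y):
    """Simple regression y = a + b*x by least squares (one covariate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def oaxaca(x0, y0, x1, y1):
    """Two-fold decomposition of the mean outcome gap between a treated
    group (1) and a comparison group (0): an 'explained' part due to
    different covariate means (evaluated at group-1 coefficients) plus
    an 'unexplained' structural part."""
    a0, b0 = ols(x0, y0)
    a1, b1 = ols(x1, y1)
    m0, m1 = sum(x0) / len(x0), sum(x1) / len(x1)
    explained = b1 * (m1 - m0)
    unexplained = (a1 - a0) + m0 * (b1 - b0)
    return explained, unexplained

ex, un = oaxaca([1, 2, 3], [1, 2, 3], [2, 3, 4], [5, 7, 9])
```

By construction the two components sum exactly to the raw mean outcome gap between the groups.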

17.
A new method is proposed for drawing coherent statistical inferences about a real-valued parameter in problems where there is little or no prior information. Prior ignorance about the parameter is modelled by the set of all continuous probability density functions for which the derivative of the log-density is bounded by a positive constant. This set is translation-invariant, it contains density functions with a wide variety of shapes and tail behaviour, and it generates prior probabilities that are highly imprecise. Statistical inferences can be calculated by solving a simple type of optimal control problem whose general solution is characterized. Detailed results are given for the problems of calculating posterior upper and lower means, variances, distribution functions and probabilities of intervals. In general, posterior upper and lower expectations are achieved by prior density functions that are piecewise exponential. The results are illustrated by normal and binomial examples.

18.
19.
In this article we present some asymptotic theorems related to the one-truncation-parameter family of distributions. Comparison of the performance of different estimators and other inferential problems are tackled. Applications of the main results are also given, and their uses are illustrated with examples.

20.
It has often been complained that the standard framework of decision theory is insufficient. In most applications, neither the maximin paradigm (relying on complete ignorance about the states of nature) nor the classical Bayesian paradigm (assuming perfect probabilistic information on the states of nature) reflects the situation under consideration adequately. Typically one possesses some, but incomplete, knowledge of the stochastic behaviour of the states of nature. In this paper, first steps towards a comprehensive framework for decision making under such complex uncertainty are provided. Common expected utility theory is extended to interval probability, a generalized probabilistic setting which has the power to express incomplete stochastic knowledge and to take the extent of ambiguity (non-stochastic uncertainty) into account. Since two-monotone and totally monotone capacities are special cases of general interval probability, where the Choquet integral and interval-valued expectation correspond to one another, the results also show, as a welcome by-product, how to deal efficiently with Choquet expected utility and how to perform a neat decision analysis in the case of belief functions. Received: March 2000; revised version: July 2001
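The Choquet integral mentioned above can be computed on a finite state space by sorting outcomes and weighting utilities by capacity increments. Below, an ε-contamination capacity (a standard 2-monotone example, not taken from the paper) yields a lower expectation below the usual expected utility:

```python
def choquet(util, capacity):
    """Choquet integral of util (dict: state -> utility) with respect to
    a capacity on subsets of the states: visit states in decreasing
    order of utility and weight each utility by the capacity increment."""
    states = sorted(util, key=util.get, reverse=True)
    total, prev, upper = 0.0, 0.0, set()
    for s in states:
        upper.add(s)
        v = capacity(frozenset(upper))
        total += util[s] * (v - prev)
        prev = v
    return total

# epsilon-contamination of a precise P: v(A) = (1 - eps) * P(A) for A
# a proper subset, v(full set) = 1; this capacity is 2-monotone.
P = {"s1": 0.5, "s2": 0.3, "s3": 0.2}
eps = 0.2

def cap(A):
    return 1.0 if A == frozenset(P) else (1 - eps) * sum(P[s] for s in A)

util = {"s1": 10.0, "s2": 5.0, "s3": 0.0}
ceu = choquet(util, cap)  # lower expected utility under the capacity
```

The result, 5.2, sits below the precise expected utility 6.5 under P, reflecting the ambiguity the capacity encodes.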


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号