Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Process capability indices are numerical tools that quantify how well a process can meet customer requirements, specifications or engineering tolerances. Fuzzy logic is incorporated to deal with imprecise and incomplete data, along with uncertainty. This paper develops two fuzzy methods for measuring process capability in simple linear profiles for circumstances in which the lower and upper specification limits are imprecise. To guide practitioners, a numerical example is provided.
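The abstract does not detail the fuzzy-profile construction, but a minimal sketch of the crisp capability indices it generalizes may help orient readers; all names and numbers below are illustrative.

```python
# Crisp process capability indices Cp and Cpk -- the quantities the paper's
# fuzzy method generalizes. Illustrative only; the fuzzy-profile version
# described in the paper is not reproduced here.
import numpy as np

def capability_indices(x, lsl, usl):
    """Return (Cp, Cpk) for sample x and crisp specification limits."""
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

rng = np.random.default_rng(0)
x = rng.normal(10.0, 0.5, size=200)   # simulated process output
print(capability_indices(x, lsl=8.0, usl=12.0))
```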

2.
In quality control, we may confront imprecise concepts. One case is a situation in which the upper and lower specification limits (SLs) are imprecise. If we introduce vagueness into SLs, we face quite new, reasonable and interesting processes, and the ordinary capability indices are not appropriate for measuring the capability of these processes. In this paper, by analogy with the traditional process capability indices (PCIs), we develop a fuzzy analogue via a distance defined on a fuzzy limit space and introduce PCIs in which, instead of precise SLs, we have two membership functions for the upper and lower SLs. These indices are necessary when SLs are fuzzy, and they are helpful for comparing manufacturing processes with fuzzy SLs. Some interesting relations among the introduced indices are proved. Numerical examples are given to clarify the method.
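As a generic illustration of the mechanics (not the paper's distance-based indices), one can α-cut triangular membership functions for the fuzzy SLs and propagate the resulting intervals through the classical Cp formula; all parameter values below are assumptions.

```python
# Interval-valued Cp obtained by alpha-cutting triangular fuzzy specification
# limits. A generic alpha-cut sketch, not the distance-based indices defined
# in the paper.
import numpy as np

def tri_alpha_cut(a, b, c, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fuzzy_cp(sigma, lsl_tri, usl_tri, alphas):
    cuts = {}
    for alpha in alphas:
        l_lo, l_hi = tri_alpha_cut(*lsl_tri, alpha)
        u_lo, u_hi = tri_alpha_cut(*usl_tri, alpha)
        # Cp is increasing in USL and decreasing in LSL, so interval
        # arithmetic gives the exact alpha-cut of the fuzzy index.
        cuts[alpha] = ((u_lo - l_hi) / (6 * sigma), (u_hi - l_lo) / (6 * sigma))
    return cuts

print(fuzzy_cp(0.5, lsl_tri=(7.5, 8.0, 8.5), usl_tri=(11.5, 12.0, 12.5),
               alphas=[0.0, 0.5, 1.0]))
```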

3.
In finance, inferences about future asset returns are typically quantified with the use of parametric distributions and single-valued probabilities. It is attractive to use less restrictive inferential methods, including nonparametric methods which do not require distributional assumptions about variables, and imprecise probability methods which generalize the classical concept of probability to set-valued quantities. Main attractions include the flexibility of the inferences to adapt to the available data and that the level of imprecision in inferences can reflect the amount of data on which these are based. This paper introduces nonparametric predictive inference (NPI) for stock returns. NPI is a statistical approach based on few assumptions, with inferences strongly based on data and with uncertainty quantified via lower and upper probabilities. NPI is presented for inference about future stock returns, as a measure for risk and uncertainty, and for pairwise comparison of two stocks based on their future aggregate returns. The proposed NPI methods are illustrated using historical stock market data.
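A small sketch of the core NPI idea, assuming Hill's A(n): the next observation falls in each of the n+1 intervals formed by the n observed values with probability 1/(n+1), which yields lower and upper probabilities for exceeding a threshold. The data are made up.

```python
# NPI lower/upper probabilities for the event that the next return exceeds a
# threshold t, based on Hill's A(n) assumption. Assumes t is not tied with an
# observed value.
import numpy as np

def npi_prob_exceeds(data, t):
    x = np.sort(np.asarray(data))
    n = len(x)
    above = np.sum(x > t)                    # intervals entirely above t
    lower = above / (n + 1)
    upper = min(above + 1, n + 1) / (n + 1)  # add the interval straddling t
    return lower, upper

returns = [-0.02, 0.01, 0.03, -0.01, 0.05, 0.00, 0.02]
print(npi_prob_exceeds(returns, t=0.005))    # (4/8, 5/8)
```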

4.
Hierarchical models are rather common in uncertainty theory. They arise when there is a ‘correct’ or ‘ideal’ (the so-called first-order) uncertainty model about a phenomenon of interest, but the modeler is uncertain about what it is. The modeler's uncertainty is then called second-order uncertainty. For most of the hierarchical models in the literature, both the first- and the second-order models are precise, i.e., they are based on classical probabilities. In the present paper, I propose a specific hierarchical model that is imprecise at the second level, which means that at this level, lower probabilities are used. No restrictions are imposed on the underlying first-order model: that is allowed to be either precise or imprecise. I argue that this type of hierarchical model generalizes and includes a number of existing uncertainty models, such as imprecise probabilities, Bayesian models, and fuzzy probabilities. The main result of the paper is what I call precision–imprecision equivalence: the implications of the model for decision making and statistical reasoning are the same, whether the underlying first-order model is assumed to be precise or imprecise.

5.
The problem of approximating an interval null or imprecise hypothesis test by a point null or precise hypothesis test under a Bayesian framework is considered. In the literature, some of the methods for solving this problem have used the Bayes factor for testing a point null and justified it as an approximation to the interval null. However, many authors recommend evaluating tests through the posterior odds, a Bayesian measure of evidence against the null hypothesis. It is of interest then to determine whether similar results hold when using the posterior odds as the primary measure of evidence. For the prior distributions under which the approximation holds with respect to the Bayes factor, it is shown that the posterior odds for testing the point null hypothesis does not approximate the posterior odds for testing the interval null hypothesis. In fact, in order to obtain convergence of the posterior odds, a number of restrictive conditions need to be placed on the prior structure. Furthermore, under a non-symmetrical prior setup, neither the Bayes factor nor the posterior odds for testing the imprecise hypothesis converges to the Bayes factor or posterior odds, respectively, for testing the precise hypothesis. To rectify this dilemma, it is shown that constraints need to be placed on the priors. In both situations, the class of priors constructed to ensure convergence of the posterior odds is not practically useful, thus questioning, from a Bayesian perspective, the appropriateness of point null testing in a problem better represented by an interval null. The theories developed are also applied to an epidemiological data set from White et al. (Can. Veterinary J. 30 (1989) 147–149) in order to illustrate and study priors for which the point null hypothesis test approximates the interval null hypothesis test. AMS Classification: Primary 62F15; Secondary 62A15
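A minimal numeric check of the point-null-approximates-interval-null idea, assuming a normal likelihood and a normal prior (not the paper's setup); all constants are illustrative.

```python
# How closely does a point-null Bayes factor approximate an interval-null
# Bayes factor? Normal mean, prior theta ~ N(0, tau^2), data xbar ~ N(theta, se^2).
import numpy as np
from scipy import stats
from scipy.integrate import quad

se, tau, xbar, eps = 0.5, 1.0, 0.8, 0.05

lik = lambda th: stats.norm.pdf(xbar, loc=th, scale=se)
prior = lambda th: stats.norm.pdf(th, scale=tau)

# Point null: BF01 = f(xbar | 0) / m1(xbar), m1 = marginal over the prior.
m1 = quad(lambda th: lik(th) * prior(th), -np.inf, np.inf)[0]
bf_point = lik(0.0) / m1

# Interval null: renormalize the prior inside and outside [-eps, eps].
m_in_raw = quad(lambda th: lik(th) * prior(th), -eps, eps)[0]
p_in = stats.norm.cdf(eps, scale=tau) - stats.norm.cdf(-eps, scale=tau)
bf_interval = (m_in_raw / p_in) / ((m1 - m_in_raw) / (1 - p_in))

print(bf_point, bf_interval)   # close for small eps under this symmetric prior
```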

6.
In hypothesis testing, as in other statistical problems, we may confront imprecise concepts. One case is a situation in which the hypotheses are imprecise. In this paper, we recall and redefine some concepts about fuzzy hypotheses testing, and then we introduce the likelihood ratio test for fuzzy hypotheses testing. Finally, we give some applied examples.
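For context, here is the crisp likelihood ratio test the fuzzy version builds on, sketched for a normal mean with known variance; the paper's fuzzy-hypothesis statistic, which involves hypothesis membership functions, is not reproduced.

```python
# Crisp likelihood ratio test of H0: mu = mu0 vs H1: mu != mu0 for normal
# data with known sigma. Background sketch only.
import numpy as np
from scipy import stats

def lrt_normal_mean(x, mu0, sigma):
    n, xbar = len(x), np.mean(x)
    # -2 log Lambda = n * (xbar - mu0)^2 / sigma^2  ~  chi2(1) under H0
    stat = n * (xbar - mu0) ** 2 / sigma ** 2
    return stat, stats.chi2.sf(stat, df=1)

rng = np.random.default_rng(1)
print(lrt_normal_mean(rng.normal(0.3, 1.0, 50), mu0=0.0, sigma=1.0))
```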

7.
A new method is proposed for drawing coherent statistical inferences about a real-valued parameter in problems where there is little or no prior information. Prior ignorance about the parameter is modelled by the set of all continuous probability density functions for which the derivative of the log-density is bounded by a positive constant. This set is translation-invariant, it contains density functions with a wide variety of shapes and tail behaviour, and it generates prior probabilities that are highly imprecise. Statistical inferences can be calculated by solving a simple type of optimal control problem whose general solution is characterized. Detailed results are given for the problems of calculating posterior upper and lower means, variances, distribution functions and probabilities of intervals. In general, posterior upper and lower expectations are achieved by prior density functions that are piecewise exponential. The results are illustrated by normal and binomial examples.

8.
Pre-specification of the primary analysis model is a prerequisite to control the family-wise type-I-error (T1E) rate at the intended level in confirmatory clinical trials. However, mixed models for repeated measures (MMRM) have been shown to be poorly specified in study protocols. The magnitude of the resulting T1E rate inflation is still unknown. This investigation aims to quantify the magnitude of the T1E rate inflation depending on the type and number of unspecified model items as well as different trial characteristics. We simulated a randomized, double-blind, parallel group, phase III clinical trial under the assumption that there is no treatment effect at any time point. The simulated data was analysed using different clusters, each including several MMRMs that are compatible with the imprecise pre-specification of the MMRM. T1E rates for each cluster were estimated. A significant T1E rate inflation could be shown for ambiguous model specifications, with a maximum T1E rate of 7.6% [7.1%; 8.1%]. The results show that the magnitude of the T1E rate inflation depends on the type and number of unspecified model items as well as the sample size and allocation ratio. The imprecise specification of nuisance parameters may not lead to a significant T1E rate inflation. However, the results of this simulation study rather underestimate the true T1E rate inflation. In conclusion, imprecise MMRM specifications may lead to a substantial inflation of the T1E rate and can damage the ability to generate confirmatory evidence in pivotal clinical trials.
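The simulation logic itself is simple: generate many null trials, analyse each, and record the rejection fraction. A minimal sketch follows, with a two-sample t-test at the last visit standing in for the MMRM analysis the paper uses; all sizes and correlations are assumptions.

```python
# Estimating a type-I-error (T1E) rate by simulation under the null of no
# treatment effect. Simplified stand-in analysis; the paper fits MMRMs.
import numpy as np
from scipy import stats

def estimate_t1e(n_trials=2000, n_per_arm=100, n_visits=3, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    # Compound-symmetric covariance across visits (variance 1, correlation 0.5).
    cov = 0.5 * np.ones((n_visits, n_visits)) + 0.5 * np.eye(n_visits)
    rejections = 0
    for _ in range(n_trials):
        y0 = rng.multivariate_normal(np.zeros(n_visits), cov, size=n_per_arm)
        y1 = rng.multivariate_normal(np.zeros(n_visits), cov, size=n_per_arm)
        p = stats.ttest_ind(y0[:, -1], y1[:, -1]).pvalue
        rejections += p < alpha
    return rejections / n_trials

print(estimate_t1e())   # close to 0.05 for a correctly sized analysis
```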

9.
A lifetime capability index L_tp has been proposed to measure business lifetime performance, wherein output lifetime measurements are assumed to be precise, arising from the Pareto model with censored information. In the present study, we consider a more realistic situation in which the lifetime output data are imprecise. The approach developed by Buckley [Fuzzy system, Soft Comput. 9 (2005), pp. 757–760; Fuzzy statistics: Regression and prediction, Soft Comput. 9 (2005), pp. 769–775], incorporated with some extensions (a set of confidence intervals, one on top of the other), is used to construct a triangular-shaped fuzzy number for the fuzzy estimate of L_tp. With the sampling distribution of the unbiased estimator of L_tp, two useful fuzzy inference criteria, the critical value and the fuzzy p-value, are obtained to assess the lifetime performance. The presented methodology can handle lifetime performance assessment when the sample lifetime data involve imprecise information, classifying the lifetime performance with the three-decision rule. For different preset requirements and a given degree of imprecision in the data, we also develop a four-quadrant decision-making plot with which managers can easily visualize several important features of lifetime performance simultaneously when making a decision. An example of business lifetime data is given to illustrate the applicability of the proposed method.
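The Buckley construction of stacked confidence intervals can be sketched for a normal mean (the paper applies it to the unbiased estimator of L_tp under the Pareto model, which is not reproduced here): the α-cut of the fuzzy estimator is taken as the (1 − α)100% confidence interval.

```python
# Buckley-style fuzzy estimate: alpha-cuts are (1 - alpha)100% confidence
# intervals, stacked to trace a triangular-shaped fuzzy number.
import numpy as np
from scipy import stats

def buckley_fuzzy_mean(x, alphas=np.linspace(0.01, 1.0, 100)):
    n, xbar, s = len(x), np.mean(x), np.std(x, ddof=1)
    cuts = {}
    for alpha in alphas:
        t = stats.t.ppf(1 - alpha / 2, df=n - 1)   # (1 - alpha) two-sided CI
        cuts[round(alpha, 2)] = (xbar - t * s / np.sqrt(n),
                                 xbar + t * s / np.sqrt(n))
    return cuts

rng = np.random.default_rng(2)
cuts = buckley_fuzzy_mean(rng.normal(5.0, 1.0, 40))
print(cuts[0.01], cuts[1.0])   # widest cut (support) and the degenerate core
```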

10.
We discuss how lower previsions induced by multi-valued mappings fit into the framework of the behavioural theory of imprecise probabilities, and show how the notions of coherence and natural extension from that theory can be used to prove and generalise existing results in an elegant and straightforward manner. This provides a clear example for their explanatory and unifying power.

11.
A lower bound for the Bayes risk in the sequential case is given under regularity conditions. A related result on the minimax risk is also discussed. Further, some examples are given for the exponential and Poisson distributions.

12.
This paper develops clinical trial designs that compare two treatments with a binary outcome. The imprecise beta class (IBC), a class of beta probability distributions, is used in a robust Bayesian framework to calculate posterior upper and lower expectations for treatment success rates using accumulating data. The posterior expectation for the difference in success rates can be used to decide when there is sufficient evidence for randomized treatment allocation to cease. This design is formally related to the randomized play-the-winner (RPW) design, an adaptive allocation scheme where randomization probabilities are updated sequentially to favour the treatment with the higher observed success rate. A connection is also made between the IBC and the sequential clinical trial design based on the triangular test. Theoretical and simulation results are presented to show that the expected sample sizes on the truly inferior arm are lower using the IBC compared with either the triangular test or the RPW design, and that the IBC performs well against established criteria involving error rates and the expected number of treatment failures.
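A minimal sketch of posterior bounds under an imprecise beta class, assuming the common parameterization with priors Beta(s·t, s·(1 − t)) for t ranging over (0, 1) and a fixed learning parameter s; the parameter names and stopping rule below are illustrative, not the paper's notation.

```python
# Posterior lower/upper expected success rates under an imprecise beta class.
# After k successes in n trials the posterior mean is (s*t + k) / (s + n),
# so the bounds come from letting t -> 0 and t -> 1.

def posterior_bounds(k, n, s=2.0):
    return k / (s + n), (s + k) / (s + n)

# Sketch of a stopping idea: cease randomization when the lower bound for one
# arm clears the upper bound for the other.
arm_a, arm_b = (12, 20), (4, 20)       # (successes, patients) per arm
lo_a, up_a = posterior_bounds(*arm_a)
lo_b, up_b = posterior_bounds(*arm_b)
print((lo_a, up_a), (lo_b, up_b), "stop:", lo_a > up_b or lo_b > up_a)
```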

13.
14.
We show how mutually utility independent hierarchies, which weigh the various costs of an experiment against benefits expressed through a mixed Bayes linear utility representing the potential gains in knowledge from the experiment, provide a flexible and intuitive methodology for experimental design which remains tractable even for complex multivariate problems. A key feature of the approach is that we allow imprecision in the trade-offs between the various costs and benefits. We identify the Pareto optimal designs under the imprecise specification and suggest a criterion for selecting between such designs. The approach is illustrated with respect to an experiment related to the oral glucose tolerance test.
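When trade-off weights are imprecise, the designs worth considering are the non-dominated ones. A toy Pareto filter follows; the cost/benefit numbers are made up, and the paper's designs come from a Bayes linear utility hierarchy not reproduced here.

```python
# Pareto optimal designs under imprecise trade-offs: keep every design that
# is not dominated on all criteria (lower cost better, higher benefit better).

def pareto_optimal(designs):
    """designs: dict name -> (cost, benefit)."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] >= b[1] and a != b
    return [name for name, v in designs.items()
            if not any(dominates(w, v) for w in designs.values())]

designs = {"d1": (10.0, 0.70), "d2": (12.0, 0.90),
           "d3": (11.0, 0.65), "d4": (15.0, 0.92)}
print(pareto_optimal(designs))   # d3 is dominated by d1 and drops out
```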

15.
A common problem with laboratory assays is that a measurement of a substance in a test sample becomes relatively imprecise as the concentration decreases. A standard solution is to establish lower limits for reliable measurement. A quantitation limit is a level above which a measurement has sufficient precision to be reliably reported. The paper proposes a new approach to defining the limit of quantitation for the case where a linear calibration curve is used to estimate actual concentrations from measured values. The approach is based on the relative precision of the estimated concentration, using the delta method to approximate the precision. A graphical display is proposed for the assessment of estimated concentrations, as well as the overall reliability of the calibration curve. Our research is motivated by a clinical inhalation experiment. Comparisons are made between the proposed approach and two standard methods, using both real and simulated data.
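A sketch of the delta-method step: for a fitted calibration line y = a + b·x and a new response y0, the estimated concentration is x0 = (y0 − a)/b, and its relative standard error follows from the gradient of that ratio. The calibration data, measurement noise and thresholds below are assumptions.

```python
# Delta-method relative precision of a concentration read off a fitted linear
# calibration curve. A quantitation limit can be taken as the lowest
# concentration whose relative standard error stays below a chosen bound.
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0, 20.0])      # standards
resp = np.array([0.1, 1.2, 2.1, 5.3, 10.2, 19.8])      # measured responses
(b, a), cov = np.polyfit(conc, resp, 1, cov=True)      # cov of (slope, intercept)
sigma_y = 0.15                                         # assumed response s.d.

def rel_precision(y0):
    x0 = (y0 - a) / b
    grad = np.array([-(y0 - a) / b**2, -1.0 / b])      # d x0 / d(b, a)
    var = grad @ cov @ grad + (sigma_y / b) ** 2       # + response noise term
    return x0, np.sqrt(var) / abs(x0)                  # (estimate, relative SE)

for y0 in [0.5, 2.0, 10.0]:
    print(y0, rel_precision(y0))   # relative SE blows up near zero concentration
```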

16.
Estimation of the mean θ of a spherical distribution with prior knowledge concerning the norm ||θ|| is considered. The best equivariant estimator is obtained for the local problem ||θ|| = λ0, and its risk is evaluated. This yields a sharp lower bound for the risk functions of a large class of estimators. The risk functions of the best equivariant estimator and the best linear estimator are compared under departures from the assumption ||θ|| = λ0.

17.
Using monthly data from January 2007 to December 2017, this paper first measures systemic financial risk with 14 representative indicators covering four dimensions: extreme risk of financial institutions, contagion effects within the financial system, volatility and instability of financial markets, and liquidity and credit risk. It then uses quantile regression to measure the impact of each individual systemic risk indicator on the macroeconomy. Finally, it constructs a composite systemic financial risk indicator via partial least squares quantile regression and empirically analyses the impact of systemic financial risk on the macroeconomy. The results show that: (1) among the individual indicators, those in the institutional extreme-risk category have the largest impact on the macroeconomy, with the financial-system catastrophe risk index being the most significant; (2) the composite indicator constructed by partial least squares quantile regression reflects the impact of systemic financial risk on the macroeconomy more robustly than any individual indicator; (3) in terms of measurement performance, both the individual indicators and the composite indicator perform markedly better in the lower tail (0.2 quantile) than at the median (0.5 quantile) or in the upper tail (0.8 quantile).
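A minimal sketch of the single-indicator step: quantile regressions of a macro outcome on one risk indicator at the three quantiles the paper reports. The data are simulated, and the partial-least-squares composite step is not reproduced.

```python
# Quantile regression of a macro outcome on a single systemic-risk indicator
# at the 0.2, 0.5 and 0.8 quantiles, using statsmodels' QuantReg.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
risk = rng.normal(size=132)                        # 11 years of monthly data
macro = -0.4 * risk + rng.normal(scale=1.0, size=132)
X = sm.add_constant(risk)

for q in (0.2, 0.5, 0.8):
    res = sm.QuantReg(macro, X).fit(q=q)
    print(q, res.params[1])                        # slope at each quantile
```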

18.
19.
In this article, the preliminary test estimator is considered under the BLINEX loss function. The problem under consideration is the estimation of the location parameter of a normal distribution. The risk under the null hypothesis for the preliminary test estimator, the exact risk function for the restricted maximum likelihood estimator and the approximate risk function for the unrestricted maximum likelihood estimator are derived under BLINEX loss, and the different risk structures are compared to one another both analytically and computationally. To motivate the use of BLINEX rather than LINEX, the risk of the preliminary test estimator under BLINEX loss is compared to its risk under LINEX loss, and it is shown that the LINEX expected loss is higher than the BLINEX expected loss. Furthermore, two feasible Bayes estimators are derived under BLINEX loss, and a feasible Bayes preliminary test estimator is defined and compared to the classical preliminary test estimator.
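A sketch contrasting LINEX loss with a bounded version. The exact BLINEX parameterization used in the paper is not reproduced; the bounded transform below is an assumed stand-in that illustrates why a bounded loss caps the expected loss, consistent with LINEX expected loss exceeding BLINEX expected loss.

```python
# LINEX loss and an assumed bounded transform of it (stand-in for BLINEX).
import numpy as np

def linex(delta, a=1.0, b=1.0):
    return b * (np.exp(a * delta) - a * delta - 1.0)

def bounded_linex(delta, a=1.0, b=1.0, gamma=0.5):
    L = linex(delta, a, b)
    return L / (1.0 + gamma * L)       # bounded above by 1/gamma

deltas = np.linspace(-2, 2, 5)
print(linex(deltas))
print(bounded_linex(deltas))           # pointwise <= the LINEX values
```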

20.
This paper extends the classical methods of analysis of a two-way contingency table to the fuzzy environment for two cases: (1) when the available sample of observations is reported as imprecise data, and (2) when we prefer to categorize the variables using linguistic terms rather than crisp quantities. For this purpose, the α-cuts approach is used to extend the usual concepts of the test statistic and p-value to a fuzzy test statistic and fuzzy p-value. In addition, some measures of association are extended to fuzzy versions in order to evaluate the dependence in such contingency tables. Some practical examples are provided to demonstrate the applicability of the proposed methods to real-world problems.
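A rough α-cut sketch for a 2×2 table with triangular fuzzy counts: at each α, the statistic's range is approximated by evaluating it at the corner points of the interval-valued table. This is a crude bound (the statistic need not be monotone in each cell) and the counts are made up; the paper's exact construction is not reproduced.

```python
# Alpha-cut intervals for a chi-square statistic on a 2x2 table with
# triangular fuzzy cell counts, via a corner-point approximation.
import itertools
import numpy as np
from scipy.stats import chi2_contingency

fuzzy_counts = [(18, 20, 22), (28, 30, 32), (38, 40, 42), (8, 10, 12)]

def cut(tri, alpha):
    a, b, c = tri
    return (a + alpha * (b - a), c - alpha * (c - b))

def stat_interval(alpha):
    values = []
    for corners in itertools.product(*(cut(t, alpha) for t in fuzzy_counts)):
        table = np.array(corners).reshape(2, 2)
        values.append(chi2_contingency(table, correction=False)[0])
    return min(values), max(values)

for alpha in (0.0, 0.5, 1.0):
    print(alpha, stat_interval(alpha))   # interval shrinks to a point at alpha=1
```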
