Similar Literature
20 similar documents found (search time: 31 ms)
1.
Recent results in information theory (see Soofi (1996; 2001) for a review) include derivations of optimal information processing rules, including Bayes' theorem, for learning from data by minimizing a criterion functional, namely output information minus input information, as shown in Zellner (1988; 1991; 1997; 2002). Herein, solution post-data densities for parameters are obtained and studied for cases in which the input information is that in (1) a likelihood function and a prior density; (2) only a likelihood function; and (3) neither a prior nor a likelihood function, but only input information in the form of post-data moments of parameters, as in the Bayesian method of moments approach. It is then shown how optimal output densities can be employed to obtain predictive densities and optimal, finite-sample structural coefficient estimates under three alternative loss functions. These optimal estimates are compared with the usual estimates, e.g., maximum likelihood, two-stage least squares, and ordinary least squares. Some Monte Carlo experimental results in the literature are discussed, and implications for the future are provided.
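For context, a minimal sketch of the criterion functional, in our notation and following the form usually attributed to Zellner (1988) (consult the cited papers for the exact statement): the post-data density g(θ|y) is chosen to minimize output minus input information,

$$
\Delta[g]=\Big[\int g(\theta\mid y)\ln g(\theta\mid y)\,d\theta+\ln h(y)\Big]-\Big[\int g(\theta\mid y)\ln \pi(\theta)\,d\theta+\int g(\theta\mid y)\ln f(y\mid\theta)\,d\theta\Big],
$$

which can be rewritten as the Kullback–Leibler divergence between g and the Bayes posterior π(θ)f(y|θ)/h(y). It is therefore minimized, at value zero, by Bayes' theorem, making the rule "100% efficient" in this sense.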

2.
In quality control, we may confront imprecise concepts. One case is a situation in which the upper and lower specification limits (SLs) are imprecise. If we introduce vagueness into the SLs, we face quite new, reasonable and interesting processes, and the ordinary capability indices are not appropriate for measuring the capability of these processes. In this paper, by analogy with the traditional process capability indices (PCIs), we develop a fuzzy analogue via a distance defined on a fuzzy limit space and introduce PCIs in which, instead of precise SLs, we have two membership functions for the upper and lower SLs. These indices are necessary when the SLs are fuzzy, and they are helpful for comparing manufacturing processes with fuzzy SLs. Some interesting relations among the introduced indices are proved. Numerical examples are given to clarify the method.
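For reference, the crisp indices that such fuzzy analogues generalize are the standard PCIs (textbook definitions, not taken from this paper):

$$
C_p=\frac{USL-LSL}{6\sigma},\qquad C_{pk}=\min\!\left(\frac{USL-\mu}{3\sigma},\;\frac{\mu-LSL}{3\sigma}\right),
$$

where μ and σ are the process mean and standard deviation. In the fuzzy setting, the crisp limits USL and LSL are replaced by membership functions, and distances are measured on the fuzzy limit space.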

3.
A lifetime capability index, L_tp, has been proposed to measure business lifetime performance, where the output lifetime measurements are assumed to be precise and to follow a Pareto model with censored information. In the present study, we consider the more realistic situation in which the lifetime output data are imprecise. The approach developed by Buckley [Fuzzy system, Soft Comput. 9 (2005), pp. 757–760; Fuzzy statistics: Regression and prediction, Soft Comput. 9 (2005), pp. 769–775], incorporated with some extensions (a set of confidence intervals, one on top of the other), is used to construct a triangular-shaped fuzzy number as the fuzzy estimate of L_tp. Using the sampling distribution of the unbiased estimator of L_tp, two useful fuzzy inference criteria, the critical value and the fuzzy p-value, are obtained to assess lifetime performance. The presented methodology can handle lifetime performance assessment when the sample lifetime data involve imprecise information, classifying the lifetime performance with a three-decision rule. For different preset requirements and a given degree of imprecision in the data, we also develop a four-quadrant decision-making plot from which managers can easily visualize several important features of lifetime performance simultaneously when making a decision. An example with business lifetime data is given to illustrate the applicability of the proposed method.
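A minimal sketch of the Buckley-style construction, assuming an approximately normal estimator (the function name and data are illustrative, not from the paper): stacking the (1 − α)·100% confidence intervals, one on top of the other, yields the α-cuts of a triangular-shaped fuzzy estimate.

```python
import numpy as np
from scipy import stats

def buckley_fuzzy_estimate(theta_hat, se, alphas=np.linspace(0.01, 1.0, 100)):
    """Sketch of a Buckley-style fuzzy estimator: the alpha-cut at level
    `alpha` is taken to be the (1 - alpha)*100% confidence interval, so
    stacking the intervals yields a triangular-shaped fuzzy number with
    core at the point estimate `theta_hat`. Assumes an (approximately)
    normal sampling distribution with standard error `se`."""
    cuts = {}
    for a in alphas:
        z = stats.norm.ppf(1 - a / 2)          # two-sided (1 - a) interval
        cuts[round(float(a), 3)] = (theta_hat - z * se, theta_hat + z * se)
    return cuts

# Example: fuzzy estimate of a parameter with point estimate 1.2, s.e. 0.15
cuts = buckley_fuzzy_estimate(1.2, 0.15)
print(cuts[0.05])   # wide cut near the base of the fuzzy number
print(cuts[1.0])    # degenerate cut at the core: (1.2, 1.2)
```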

4.
Over the past few decades, the regression model has received considerable attention and has been shown to be successful when applied together with other models. One of the most successful of these is the sample selection model, or selectivity model. However, uncertainties and ambiguities do exist in such models, particularly in the relationship between the endogenous and exogenous variables. These can undermine the model's ability to produce estimates that explain the actual situation of a phenomenon. These are the questions and problems yet to be explored, and they form the main aim of this study. A new framework for estimating the sample selection model using the concept of fuzzy modelling is introduced. In this approach, a flexible fuzzy concept is hybridized with the parametric sample selection model, giving the fuzzy parametric sample selection model (FPSSM). The elements of vagueness and uncertainty in the models are represented in the model construction, as a way of increasing the available information and producing a more accurate model. This led to the development of a convergence theorem, presented in terms of triangular fuzzy numbers, for use in the model. Consistency and efficiency of the proposed model are examined via Monte Carlo simulation, with the error terms of the FPSSM assumed to follow normal and chi-square distributions. Simulation results show that the FPSSM is consistent and efficient when its error distribution is normal, whereas under the chi-square distribution it is found to be inconsistent.

5.
Real lifetime data are never precise numbers but are more or less non-precise, also called fuzzy. This kind of imprecision affects all measurement results of continuous variables, and therefore also time observations. Imprecision is different from errors and variability. Therefore, estimation methods for reliability characteristics have to be adapted to the situation of fuzzy lifetimes in order to obtain realistic results.

6.
Horvitz and Thompson's (HT) [1952. A generalization of sampling without replacement from a finite universe. J. Amer. Statist. Assoc. 47, 663–685] well-known unbiased estimator for a finite population total admits an unbiased estimator of its variance, given by Yates and Grundy [1953. Selection without replacement from within strata with probability proportional to size. J. Roy. Statist. Soc. B 15, 253–261], provided the parent sampling design involves a constant number of distinct units in every sample to be chosen. If the design, in addition, ensures uniform non-negativity of this variance estimator, Rao and Wu [1988. Resampling inference with complex survey data. J. Amer. Statist. Assoc. 83, 231–241] have given their re-scaling bootstrap technique to construct confidence intervals and to estimate mean square errors for non-linear functions of the finite population totals of several real variables, the totals themselves being estimated by Horvitz–Thompson estimators (HTEs). Since the bootstrap variance of the bootstrap estimator must be equated to the Yates–Grundy estimator (YGE) of the variance of the HTE in the single-variable, i.e. linear, case, the YG variance estimator is required to be positive for the sample actually drawn.
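For reference, the two estimators in question are (standard forms):

$$
\hat Y_{HT}=\sum_{i\in s}\frac{y_i}{\pi_i},\qquad
\hat V_{YG}(\hat Y_{HT})=\sum_{i<j\in s}\frac{\pi_i\pi_j-\pi_{ij}}{\pi_{ij}}\left(\frac{y_i}{\pi_i}-\frac{y_j}{\pi_j}\right)^{2},
$$

where π_i and π_ij are the first- and second-order inclusion probabilities. The Yates–Grundy form requires a fixed sample size, and it is uniformly non-negative when π_iπ_j ≥ π_ij for all pairs of units.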

7.
This paper extends the Wilcoxon signed-rank test to the case where the available observations are imprecise quantities rather than crisp numbers. To do this, the associated test statistic is extended using the α-cuts approach. In addition, the concept of the critical value is generalized to the case where the significance level is given by a fuzzy number. Finally, to accept or reject the null hypothesis of interest, a preference degree between two fuzzy sets is employed to compare the observed fuzzy test statistic with the fuzzy critical value.
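As a point of reference, the crisp statistic being extended can be computed as below (a baseline sketch with made-up paired data, not the paper's fuzzy procedure):

```python
import numpy as np
from scipy.stats import wilcoxon

# Crisp baseline: the classical Wilcoxon signed-rank test that the paper
# extends to fuzzy observations via alpha-cuts. Data are illustrative.
before = np.array([12.1, 11.4, 13.0, 12.7, 11.9, 12.4, 13.3, 12.0])
after  = np.array([11.8, 11.5, 12.2, 12.0, 11.6, 12.5, 12.6, 11.7])

stat, p = wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")

# In the fuzzy extension, each observation is a fuzzy number; evaluating
# the statistic over the alpha-cuts of the data yields a fuzzy test
# statistic, which is then compared with a fuzzy critical value via a
# preference degree between fuzzy sets.
```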

8.
Construction methods for prior densities are investigated from a predictive viewpoint. Predictive densities for future observables are constructed using observed data. The simultaneous distribution of the future observables and the observed data is assumed to belong to a parametric submodel of a multinomial model; future observables and data are possibly dependent. The discrepancy of a predictive density from the true conditional density of the future observables given the observed data is evaluated by the Kullback–Leibler divergence. It is proved that limits of Bayesian predictive densities form an essentially complete class. Latent information priors are defined as priors maximizing the conditional mutual information between the parameter and the future observables given the observed data. Minimax predictive densities are constructed as limits of Bayesian predictive densities based on prior sequences converging to the latent information priors.
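For concreteness, the discrepancy used is of the form (generic notation, not necessarily the paper's)

$$
D\!\left(p(\cdot\mid y,\theta)\,\big\|\,\hat p(\cdot\mid y)\right)=\sum_{\tilde y}p(\tilde y\mid y,\theta)\ln\frac{p(\tilde y\mid y,\theta)}{\hat p(\tilde y\mid y)},
$$

where $\tilde y$ denotes the future observables and $y$ the data, and a latent information prior maximizes the conditional mutual information $I_\pi(\theta;\tilde y\mid y)$ over priors $\pi$.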

9.
In this paper, we consider proper block designs and derive an upper bound on the number of blocks that can have a fixed number of symbols in common with a given block of the design. To arrive at the desired bound, a generalization of an integer programming theorem due to Bush (1976) is first obtained. The integer programming theorem is then used to derive the main result of the paper. The bound given here is then compared with a similar bound obtained by Kageyama and Tsuji (1977).

10.
A methodology is developed for estimating consumer acceptance limits on a sensory attribute of a manufactured product. In concept, these limits are analogous to engineering tolerances. The method is based on a generalization of Stevens' Power Law, expressed as a nonlinear statistical model. Instead of restricting the analysis to this particular case, a strategy is discussed for evaluating nonlinear models in general, since scientific models are frequently of nonlinear form. The strategy focuses on understanding the geometrical contrasts between linear and nonlinear model estimation and on assessing the bias in estimation and the departures from a Gaussian sampling distribution. Computer simulation is employed to examine the behavior of nonlinear least-squares estimation. In addition to the usual Gaussian assumption, a bootstrap sample-reuse procedure and a general triangular distribution are introduced for evaluating the effects of a non-Gaussian or asymmetrical error structure. Recommendations are given for further model analysis based on the simulation results. In the case of a model for which estimation bias is not a serious issue, estimating functions of the model are considered. Application of these functions to the generalization of Stevens' Power Law leads to a means for defining and estimating consumer acceptance limits. The statistical form of the law and the model evaluation strategy are applied to consumer research data, and the estimation of consumer acceptance limits is illustrated and discussed.
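A minimal sketch of nonlinear least-squares estimation for Stevens' power law ψ = kφⁿ (synthetic data and parameter values are illustrative, not the paper's consumer-research data):

```python
import numpy as np
from scipy.optimize import curve_fit

# Stevens' power law: perceived intensity psi = k * phi**n, the nonlinear
# model underlying the generalized law in the paper.
def power_law(phi, k, n):
    return k * phi**n

rng = np.random.default_rng(0)
phi = np.linspace(1.0, 20.0, 30)                  # physical stimulus levels
psi = power_law(phi, 2.0, 0.6) + rng.normal(0, 0.2, phi.size)

(k_hat, n_hat), cov = curve_fit(power_law, phi, psi, p0=[1.0, 1.0])
print(f"k = {k_hat:.3f}, n = {n_hat:.3f}")

# A bootstrap sample-reuse step (as discussed in the paper) would refit
# the model to resampled data to probe estimation bias and departures
# from a Gaussian sampling distribution.
```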

11.
Lifetime data analysis is regarded as one of the significant offshoots of statistics. Classical statistical techniques treat lifetime observations as precise numbers and cover only the variation among the observations. In fact, there are two types of uncertainty in data: variation among the observations and fuzziness. Analysis techniques that do not consider fuzziness and are based only on precise lifetime observations therefore use incomplete information and hence lead to misleading results. This study aims to generalize parameter estimation, survival functions, and hazard rates for fuzzy lifetime data.

12.
Thompson (1997) considered a wide definition of the p-value and found the Bayes p-value for testing a point null hypothesis H0: θ = θ0 versus H1: θ ≠ θ0. In this paper, the general case of testing H0: θ ∈ Θ0 versus H1: θ ∈ Θ0^c is studied. A generalization of the concept of the p-value is given, and it is proved that the posterior predictive p-value based on the posterior odds is (asymptotically) a Bayes p-value. Finally, it is suggested that this posterior predictive p-value could be used as a reference p-value.
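As a toy illustration of a posterior predictive p-value (a simple normal-mean model with a conjugate prior; this is a generic simulation sketch, not the paper's posterior-odds construction):

```python
import numpy as np

rng = np.random.default_rng(1)

# Observed data: normal with unknown mean, known variance sigma2
y = rng.normal(0.4, 1.0, size=25)
n, sigma2 = y.size, 1.0
mu0, tau2 = 0.0, 10.0                      # prior mean and variance

# Conjugate posterior for the mean
post_var = 1.0 / (n / sigma2 + 1.0 / tau2)
post_mean = post_var * (y.sum() / sigma2 + mu0 / tau2)

# Simulate replicated datasets from the posterior predictive distribution
# and compare a discrepancy T (here the sample mean) with its observed value.
T_obs = y.mean()
mus = rng.normal(post_mean, np.sqrt(post_var), size=5000)
y_rep = rng.normal(mus[:, None], np.sqrt(sigma2), size=(5000, n))
ppp = np.mean(y_rep.mean(axis=1) >= T_obs)
print(f"posterior predictive p-value: {ppp:.3f}")
```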

13.
In this study, classical and Bayesian inference methods are introduced to analyze lifetime data sets in the presence of left censoring, considering two generalizations of the Lindley distribution: a first generalization proposed by Ghitany et al. [Power Lindley distribution and associated inference, Comput. Statist. Data Anal. 64 (2013), pp. 20–33], denoted the power Lindley distribution, and a second generalization proposed by Sharma et al. [The inverse Lindley distribution: A stress–strength reliability model with application to head and neck cancer data, J. Ind. Prod. Eng. 32 (2015), pp. 162–173], denoted the inverse Lindley distribution. In our approach, we use a distribution obtained from these two generalizations, denoted the inverse power Lindley distribution. A numerical illustration is presented using a dataset of thyroglobulin levels in a group of individuals with differentiated thyroid cancer.
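For reference, the two component densities, as usually stated in the literature (our transcription; verify against the cited papers), are

$$
f_{PL}(x)=\frac{\alpha\beta^{2}}{\beta+1}\,(1+x^{\alpha})\,x^{\alpha-1}e^{-\beta x^{\alpha}},\qquad
f_{IL}(x)=\frac{\theta^{2}}{1+\theta}\,\frac{1+x}{x^{3}}\,e^{-\theta/x},\qquad x>0,
$$

the power Lindley and inverse Lindley densities, respectively.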

14.
Tests for randomness of observations involving one factor have been considered by many authors, among them Mosteller [4], Bateman [21], Barton and David [1], and Shaughnessy [7]. On many occasions, however, data involve two different factors, such as time and location, temperature and pressure, or dose levels and patient responses. In this paper, we consider tests for randomness of observations involving two factors, for which the data are given in matrix form. Some new definitions of runs for a matrix of data are given and discussed. A special kind of run is proposed for the test of randomness; its distribution and properties are studied, and some critical regions are tabulated.

15.
In the simple and widely used Box–Muller method [G. Box and M. Muller, A note on the generation of random normal deviates, Ann. Math. Statist. 29 (1958), pp. 610–611], a pair of standard, independent normal variables is obtained from a pair of independent uniform random variables on (0,1). In this article, we present a very simple and elegant generalization of this method that yields a pair of correlated standard normal variables with a given coefficient of correlation. This generalized method, which is computationally very easy, is interpreted in geometric terms as a translation of the uniform interval (0,1) and a rotation by a defined angle, both related to the coefficient of correlation. Some numerical results are simulated and statistically analysed, showing that the generalization is extremely simple and powerful.
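A minimal sketch (the classical Box–Muller step followed by the standard linear-combination step; the paper's own translation-plus-rotation construction is equivalent in distribution but not reproduced here):

```python
import numpy as np

def correlated_normal_pair(u1, u2, rho):
    """From two independent Uniform(0,1) draws, return a pair of standard
    normal variables with correlation rho."""
    r = np.sqrt(-2.0 * np.log(u1))
    z1 = r * np.cos(2.0 * np.pi * u2)     # Box-Muller, first variate
    z2 = r * np.sin(2.0 * np.pi * u2)     # Box-Muller, second variate
    return z1, rho * z1 + np.sqrt(1.0 - rho**2) * z2

rng = np.random.default_rng(2)
u1, u2 = rng.random(100_000), rng.random(100_000)
x, y = correlated_normal_pair(u1, u2, rho=0.7)
print(np.corrcoef(x, y)[0, 1])   # should be close to 0.7
```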

16.
The use of Bayesian models for the reconstruction of images degraded by both a blurring function H and the presence of noise has become popular in recent years. Making an analogy between classical degradation processes and resampling, we propose a Bayesian model for generating finer-resolution images. The approach involves defining resampling, or aggregation, as a linear operator applied to an original picture to produce derived lower-resolution data, which represent the available experimental information. Within this framework, the operation of making inference on the original data can be viewed as an inverse linear transformation problem. This problem, formalized through Bayes' theorem, can be solved by the classical maximum a posteriori (MAP) estimation procedure. Image local characteristics are assumed to follow a Gaussian Markov random field. Under some mild assumptions, only simple, iterative, and local operations are involved, making parallel 'relaxation' processing feasible. Experimental results are shown for some images, for which good subsampling estimates are obtained.
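In generic notation (not necessarily the paper's), the MAP estimate under a Gaussian likelihood and a Gaussian Markov random field prior solves

$$
\hat x_{MAP}=\arg\min_{x}\left\{\frac{\lVert y-Hx\rVert^{2}}{2\sigma^{2}}+\frac{\lambda}{2}\,x^{\top}Qx\right\},
$$

where H is the aggregation (resampling) operator, Q encodes the neighbourhood structure of the random field, and the quadratic objective is amenable to the iterative, local, relaxation-type updates mentioned in the abstract.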

17.
Quality characteristics known as attributes cannot be conveniently represented numerically. In general, attribute data can be regarded as fuzzy data, which are ubiquitous in manufacturing processes, cannot be measured precisely, and are often collected by visual inspection. In this paper, we construct a p control chart for monitoring the fraction of nonconforming items in a process in which fuzzy sample data are collected from the manufacturing process. The resolution identity, a well-known theorem in fuzzy set theory, is invoked to construct the control limits of fuzzy-p control charts using fuzzy data. In order to determine whether the plotted imprecise fraction of nonconforming items is within the fuzzy lower and upper control limits, we also propose a ranking method for a set of fuzzy numbers. Using the fuzzy-p control charts and the proposed acceptability function to classify the manufacturing process allows the decision-maker to make linguistic decisions such as 'rather in control' or 'rather out of control'. A practical example is provided to describe the applicability of fuzzy set theory to a conventional p control chart.
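For reference, the crisp p chart being fuzzified uses the familiar 3-sigma limits; a minimal sketch with illustrative counts:

```python
import numpy as np

# Crisp baseline for the fuzzy-p chart: classical 3-sigma p-chart limits.
# In the fuzzy version, each sample fraction nonconforming (and hence each
# limit) becomes a fuzzy number built from the data's membership functions.
counts = np.array([4, 6, 3, 7, 5, 9, 2, 6, 4, 5])   # nonconforming per sample
n = 100                                              # sample size

p_bar = counts.sum() / (counts.size * n)             # center line
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl, lcl = p_bar + 3 * sigma, max(p_bar - 3 * sigma, 0.0)
print(f"center = {p_bar:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```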

18.
This paper presents a Bayesian non-parametric approach to survival analysis based on arbitrarily right-censored data. The analysis is based on posterior predictive probabilities using a Polya tree prior distribution on the space of probability measures on [0, ∞). In particular, we show that the estimate generalizes the classical Kaplan–Meier non-parametric estimator, which is obtained in the limiting case as the weight of the prior information tends to zero.
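For reference, a from-scratch sketch of the Kaplan–Meier estimator recovered in that limit (illustrative data; ties between deaths and censorings are not handled):

```python
import numpy as np

def kaplan_meier(times, events):
    """Classical Kaplan-Meier estimator of the survival function.
    `events` is 1 for an observed death, 0 for a right-censored time."""
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    surv, s = [], 1.0
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i                   # subjects still at risk at time t
        if d == 1:                        # survival drops only at deaths
            s *= 1.0 - 1.0 / at_risk
        surv.append((t, s))
    return surv

t = np.array([3.0, 5.0, 5.5, 8.0, 10.0, 12.0])
d = np.array([1, 0, 1, 1, 0, 1])
for time, s in kaplan_meier(t, d):
    print(f"t = {time:5.1f}  S(t) = {s:.3f}")
```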

19.
This article discusses a generalization of the well-known multivariate rank statistics to the right-censored data case. An empirical process representation is used to obtain the generalization, with the marginal distribution functions estimated by Kaplan–Meier estimators. Sufficient conditions for the asymptotic normality of the generalized multivariate rank statistics under independent right censoring are specified. Several auxiliary results on the sup-norm convergence of Kaplan–Meier estimators in randomly exhausting regions are also given.

20.
Bayesian methods are increasingly used in proof-of-concept studies. An important benefit of these methods is the potential to use informative priors, thereby reducing sample size. This is particularly relevant for treatment arms for which there is a substantial amount of historical information, such as placebo and active comparators. One issue with using an informative prior is the possibility of a mismatch between the informative prior and the observed data, referred to as prior-data conflict. We focus on two methods for dealing with this: a testing approach and a mixture prior approach. The testing approach assesses prior-data conflict by comparing the observed data to the prior predictive distribution, and resorts to a non-informative prior if prior-data conflict is declared. The mixture prior approach uses a prior with a precise and a diffuse component. We assess these approaches for the normal case via simulation and show that they have some attractive features compared with the standard one-component informative prior. For example, when the discrepancy between the prior and the data is sufficiently marked, and intuitively one feels less certain about the results, both the testing and mixture approaches typically yield wider posterior credible intervals than when there is no discrepancy. In contrast, when there is no discrepancy, the results of these approaches are typically similar to those of the standard approach. Whilst for any specific study the operating characteristics of the selected approach should be assessed and agreed at the design stage, we believe these two approaches are each worthy of consideration. Copyright © 2015 John Wiley & Sons, Ltd.
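A minimal sketch of the mixture-prior mechanics for a normal mean with known variance (illustrative values, not the authors' code): the component weights update in proportion to each component's marginal likelihood, so prior-data conflict shifts mass to the diffuse component and widens the credible interval.

```python
import numpy as np
from scipy import stats

def mixture_posterior(ybar, n, sigma2, w, m, v):
    """Conjugate update of a two-component normal mixture prior for a
    normal mean with known variance sigma2. `w`, `m`, `v` hold the prior
    weights, means, and variances (one entry per component). Returns the
    posterior weights, means, and variances."""
    post_v = 1.0 / (n / sigma2 + 1.0 / v)
    post_m = post_v * (n * ybar / sigma2 + m / v)
    # Component weights update via the marginal likelihood of ybar:
    # ybar ~ N(m_k, v_k + sigma2/n) under component k.
    marg = stats.norm.pdf(ybar, loc=m, scale=np.sqrt(v + sigma2 / n))
    post_w = w * marg / np.sum(w * marg)
    return post_w, post_m, post_v

w = np.array([0.8, 0.2])        # informative vs. diffuse prior weight
m = np.array([0.0, 0.0])
v = np.array([0.1, 100.0])      # precise and diffuse prior variances

# No conflict: data agree with the informative component
print(mixture_posterior(ybar=0.1, n=20, sigma2=1.0, w=w, m=m, v=v)[0])
# Conflict: data far from the informative prior -> weight shifts to diffuse
print(mixture_posterior(ybar=2.5, n=20, sigma2=1.0, w=w, m=m, v=v)[0])
```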
