Similar literature
20 similar records found (search time: 15 ms)
1.
A Bayesian network (BN) is an efficient graphical method that uses directed acyclic graphs (DAGs) to represent information about a set of data. BNs consist of nodes and arcs (or edges), where nodes represent variables and arcs represent relations and influences between nodes. Interest in organic food has been increasing worldwide during the last decade, and the same trend holds in Turkey. Although numerous studies deal with customer perception of organic food and customer characteristics, none of them has used BNs. This study, which demonstrates a new application area for BNs, therefore aims to reveal the perceptions and characteristics of organic food buyers. A survey was designed and administered in seven different organic bazaars in Turkey, and BNs were then constructed from the data gathered from 611 organic food consumers. The findings are consistent with previous studies: factors such as health, environmental factors, food availability, product price, consumers' income and trust in the organization are found to influence consumers effectively.
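The node-and-arc structure described above can be made concrete with a minimal sketch. The variables and probabilities below are hypothetical illustrations (not the study's actual survey model): a three-node DAG in which health consciousness and income are parents of the decision to buy organic food, and the joint distribution factorises along the arcs.

```python
# Minimal Bayesian-network sketch (hypothetical variables and numbers):
# DAG is health -> buys <- income, so the joint factorises as
# P(health) * P(income) * P(buys | health, income).
P_health = {True: 0.6, False: 0.4}          # P(health-conscious)
P_income = {True: 0.5, False: 0.5}          # P(high income)
# Conditional probability table: P(buys organic | health, income)
P_buys = {(True, True): 0.8, (True, False): 0.5,
          (False, True): 0.3, (False, False): 0.1}

def joint(health, income, buys):
    """Joint probability from the chain-rule factorisation of the DAG."""
    p = P_health[health] * P_income[income]
    p_b = P_buys[(health, income)]
    return p * (p_b if buys else 1.0 - p_b)

# Marginal probability of buying organic food, summing over both parents.
p_buy = sum(joint(h, i, True) for h in (True, False) for i in (True, False))
```

The marginal is obtained by summing the factorised joint over the parent configurations, which is exactly the computation a BN inference engine performs on a larger scale.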

2.
A Bayesian network (BN) is a probabilistic graphical model that represents a set of variables and their probabilistic dependencies. Formally, BNs are directed acyclic graphs whose nodes represent variables and whose arcs encode the conditional dependencies among the variables. Nodes can represent any kind of variable, be it a measured parameter, a latent variable, or a hypothesis; they are not restricted to representing random variables, which reflects the “Bayesian” aspect of a BN. Efficient algorithms exist that perform inference and learning in BNs. BNs that model sequences of variables are called dynamic BNs. In this context, [A. Harel, R. Kenett, and F. Ruggeri, Modeling web usability diagnostics on the basis of usage statistics, in Statistical Methods in eCommerce Research, W. Jank and G. Shmueli, eds., Wiley, 2008] provide a comparison between Markov chains and BNs in the analysis of web usability from e-commerce data. A comparison of regression models, structural equation models, and BNs is presented in Anderson et al. [R.D. Anderson, R.D. Mackoy, V.B. Thompson, and G. Harrell, A Bayesian network estimation of the service-profit chain for transport service satisfaction, Decision Sciences 35(4), (2004), pp. 665–689]. In this article we apply BNs to the analysis of customer satisfaction surveys and demonstrate the potential of the approach. In particular, BNs offer advantages over other statistical techniques, designed primarily for testing hypotheses, in implementing models of cause and effect. Other advantages include the ability to conduct probabilistic inference for prediction and diagnostic purposes, with output that can be intuitively understood by managers.
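The diagnostic use of BNs mentioned above boils down to computing a posterior over a latent node given observed evidence. A toy two-node example (hypothetical satisfaction-survey numbers, not the article's model) shows exact inference by enumeration:

```python
# Toy BN: Satisfaction -> Recommend. Given the evidence Recommend = True,
# compute the posterior over the latent Satisfaction node via Bayes' rule.
# All probabilities are hypothetical illustrations.
P_sat = {True: 0.7, False: 0.3}
P_rec_given_sat = {True: 0.9, False: 0.2}   # P(recommend | satisfaction)

def posterior_sat(rec):
    # Unnormalised joint for each satisfaction state, then normalise.
    scores = {}
    for s in (True, False):
        p_rec = P_rec_given_sat[s] if rec else 1.0 - P_rec_given_sat[s]
        scores[s] = P_sat[s] * p_rec
    z = sum(scores.values())
    return {s: v / z for s, v in scores.items()}

post = posterior_sat(True)   # P(satisfied | customer recommended the service)
```

This is the "diagnostic" direction of inference: evidence on an effect node updates beliefs about its cause, which is what makes BN output easy for managers to read.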

3.
Inference in hybrid Bayesian networks using dynamic discretization
We consider approximate inference in hybrid Bayesian networks (BNs) and present a new iterative algorithm that efficiently combines dynamic discretization with robust propagation algorithms on junction trees. Our approach offers a significant extension to Bayesian network theory and practice by providing a flexible way of modeling continuous nodes in BNs conditioned on complex configurations of evidence and intermixed with discrete nodes as both parents and children of continuous nodes. Our algorithm is implemented in a commercial Bayesian network software package, AgenaRisk, which allows model construction and testing to be carried out easily. The results from the empirical trials clearly show how our software can deal effectively with different types of hybrid models containing elements of expert judgment as well as statistical inference. In particular, we show how the rapid convergence of the algorithm towards zones of high probability density makes robust inference possible even in situations where, due to the lack of information in both prior and data, robust sampling becomes infeasible.
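The core idea of dynamic discretization, concentrating bins in zones of high probability density, can be sketched very simply. This is not the AgenaRisk algorithm (which iterates with junction-tree propagation); it is a stripped-down illustration that repeatedly splits whichever interval currently carries the most probability mass of a standard normal node:

```python
import math

# Sketch of the dynamic-discretisation idea: iteratively split the interval
# carrying the most probability mass, so the grid refines itself around the
# high-density region of a continuous node (here N(0,1)).
def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def refine(lo=-6.0, hi=6.0, splits=20):
    edges = [lo, hi]
    for _ in range(splits):
        masses = [norm_cdf(b) - norm_cdf(a) for a, b in zip(edges, edges[1:])]
        i = masses.index(max(masses))                  # heaviest bin
        edges.insert(i + 1, 0.5 * (edges[i] + edges[i + 1]))  # bisect it
    return edges

edges = refine()
widths = [b - a for a, b in zip(edges, edges[1:])]
# Bins near the mode end up narrow; the far tails keep wide bins.
```

After 20 splits the tail bins remain coarse (their mass never becomes the maximum) while the bins around zero have been halved repeatedly, which is the behaviour the abstract describes as convergence towards zones of high probability density.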

4.
Bayesian networks for imputation
Summary. Bayesian networks are particularly useful for dealing with high-dimensional statistical problems. They allow a reduction in the complexity of the phenomenon under study by representing joint relationships between a set of variables through conditional relationships between subsets of these variables. Following Thibaudeau and Winkler, we use Bayesian networks for imputing missing values. This method is introduced to deal with the problem of the consistency of imputed values: preservation of statistical relationships between variables (statistical consistency) and preservation of logical constraints in the data (logical consistency). We perform some experiments on a subset of anonymous individual records from the 1991 UK population census.
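The consistency idea can be illustrated on toy data (hypothetical categories, not the census records): learn the conditional distribution of a child variable given its parent from the complete records, then impute a missing value with the conditional mode, so imputed values respect the relationship observed in the data.

```python
from collections import Counter, defaultdict

# Imputation sketch in the spirit of the paper, on toy records of
# (employment status, income source). Learning P(income | status) from
# complete cases keeps imputed values statistically consistent with the
# observed relationship between the two variables.
complete = [("employed", "salary"), ("employed", "salary"),
            ("student", "none"), ("student", "grant"),
            ("student", "none"), ("employed", "salary")]

cond = defaultdict(Counter)
for status, income in complete:
    cond[status][income] += 1

def impute_income(status):
    # Most probable income source given employment status (conditional mode).
    return cond[status].most_common(1)[0][0]

incomplete = [("student", None), ("employed", None)]
filled = [(s, inc if inc is not None else impute_income(s))
          for s, inc in incomplete]
```

A student is never imputed a salary here, which is exactly the logical consistency the abstract emphasises: the conditional structure rules out combinations that never occur together.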

5.
Frequentist and Bayesian methods differ in many aspects but share some basic optimal properties. In real-life prediction problems, situations exist in which a model based on one of the above paradigms is preferable depending on some subjective criteria. Nonparametric classification and regression techniques, such as decision trees and neural networks, have both frequentist (classification and regression trees (CARTs) and artificial neural networks) as well as Bayesian counterparts (Bayesian CART and Bayesian neural networks) to learning from data. In this paper, we present two hybrid models combining the Bayesian and frequentist versions of CART and neural networks, which we call the Bayesian neural tree (BNT) models. BNT models can simultaneously perform feature selection and prediction, are highly flexible, and generalise well in settings with limited training observations. We study the statistical consistency of the proposed approaches and derive the optimal value of a vital model parameter. The excellent performance of the newly proposed BNT models is shown using simulation studies. We also provide illustrative examples using a wide variety of standard regression datasets from a publicly available machine learning repository to show the superiority of the proposed models over the widely used Bayesian CART and Bayesian neural network models.

6.
In many clinical research applications the time to occurrence of one event of interest, which may be obscured by another (so-called competing) event, is investigated. Specific interventions can only have an effect on the endpoint they address, or research questions might focus on risk factors for a certain outcome. Different approaches for the analysis of time-to-event data in the presence of competing risks have been introduced in recent decades, including some new methodologies that are not yet frequently used in the analysis of competing risks data. Cause-specific hazard regression, subdistribution hazard regression, mixture models, vertical modelling and the analysis of time-to-event data based on pseudo-observations are described in this article and applied to a dataset from a cohort study intended to establish risk stratification for cardiac death after myocardial infarction. Data analysts are encouraged to choose the appropriate methods for their specific research questions by comparing different regression approaches in the competing risks setting with regard to assumptions, methodology and interpretation of the results. Notes on applying these methods using the statistical software R are presented, and extensions to the presented standard methods proposed in the statistical literature are mentioned.
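The quantity all of these competing-risks methods target can be shown on toy data. The article works in R; the sketch below is a naive Python version that ignores censoring entirely (real data would need the Aalen-Johansen estimator or one of the regression approaches listed above), purely to illustrate what a cumulative incidence function estimates: P(T ≤ t and the event was of cause k).

```python
# Naive (no-censoring) cumulative incidence function for one cause, on
# hypothetical (time, cause) pairs, e.g. cause 1 = cardiac death,
# cause 2 = competing (non-cardiac) death.
events = [(2.0, 1), (3.5, 2), (4.0, 1), (5.0, 1), (6.0, 2), (7.5, 1)]

def cif(t, cause, data):
    """Fraction of subjects with an event of the given cause by time t."""
    n = len(data)
    return sum(1 for time, c in data if time <= t and c == cause) / n

# Probability of a cause-1 event by t = 5: three of the six subjects.
p1 = cif(5.0, 1, events)
```

Note that the cause-specific incidences partition the failures: by the last event time the two CIFs sum to one here, which is why analysing one cause while ignoring the competing one misstates the risk.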

7.
An affiliation network is a kind of two-mode social network with two different sets of nodes (namely, a set of actors and a set of social events) and edges representing the affiliation of the actors with the social events. In many affiliation networks the connections between actors and social events are only binary weighted and therefore cannot reveal the strength of affiliation. Although a number of statistical models have been proposed to analyze binary weighted affiliation networks, the asymptotic behavior of the maximum likelihood estimator (MLE) is still unknown or has not been properly explored in weighted affiliation networks. In this paper, we study an affiliation model with the degree sequence as the exclusively natural sufficient statistic in the exponential family of distributions. We derive the consistency and asymptotic normality of the maximum likelihood estimator in affiliation finite discrete weighted networks when the numbers of actors and events both go to infinity. Simulation studies and a real data example demonstrate our theoretical results.

8.
In this article, we present a Bernstein inequality for sums of random variables defined on a graphical network whose number of nodes grows at an exponential rate. The inequality can be used to derive concentration inequalities in highly connected networks, and to obtain consistency properties for nonparametric estimators of conditional expectation functions derived from such networks.
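For readers unfamiliar with the form of such bounds, the classical i.i.d. Bernstein inequality (a far simpler setting than the network-indexed version of the article) can be evaluated numerically. For variables bounded by b with variance sigma^2, it reads P(|mean − mu| ≥ t) ≤ 2 exp(−n t² / (2 sigma² + 2 b t / 3)):

```python
import math

# Classical i.i.d. Bernstein bound, purely to illustrate the shape of the
# inequality: the tail probability decays exponentially in the sample size n.
def bernstein_bound(n, t, sigma2, b):
    return 2.0 * math.exp(-n * t * t / (2.0 * sigma2 + 2.0 * b * t / 3.0))

# The bound tightens exponentially as n grows (t, sigma2, b held fixed).
bounds = [bernstein_bound(n, t=0.1, sigma2=0.25, b=1.0) for n in (10, 100, 1000)]
```

The exponential decay in n is what delivers the consistency of the nonparametric estimators mentioned above: the deviation probability becomes summable as the network grows.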

9.
Social networking sites (SNSs) make it possible for people to connect and communicate with one another. Due to the lack of privacy mechanisms, users of SNSs are vulnerable to various kinds of attacks, and security and privacy issues have become critically important with the rapid expansion of SNSs. Most network applications, such as pervasive computing, grid computing and P2P networks, can be viewed as multi-agent systems which are open, anonymous and dynamic in nature. Moreover, most existing reputation trust models (RTMs) do not rely on any clustering structure, although clustering structures can be used to calculate the trustworthiness of network nodes effectively. In this paper, a novel cosine similarity-based clustering and dynamic reputation trust aware key generation (CSBC-DRT) scheme is proposed. For better clustering, a cosine similarity measure is estimated for all the nodes in the network, and based on the similarity among the nodes, the network nodes are clustered into disjoint groups. An RTM is built in the proposed scheme. Here, an improved MD5 algorithm is used for key generation and key verification. After key verification, trust measures such as the reputation value and the positive and negative edge values are computed to form the trusted network. The proposed scheme performs better than existing RTMs, providing trusted communication in social networks.
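Only the clustering step is easy to sketch generically; the key-generation and reputation components of CSBC-DRT are not reproduced here. The greedy threshold rule below (a simplification, with hypothetical feature vectors) merges a node into the first group whose seed it resembles by cosine similarity, producing disjoint groups:

```python
import math

# Cosine-similarity clustering sketch: nodes whose feature vectors are
# similar enough to a group's seed node (similarity >= threshold) join that
# group; otherwise they start a new one. Groups are disjoint by construction.
def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster(nodes, threshold=0.9):
    groups = []
    for name, vec in nodes.items():
        for g in groups:
            if cosine(vec, nodes[g[0]]) >= threshold:   # compare to group seed
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

# Hypothetical per-node feature vectors.
nodes = {"a": (1.0, 0.1), "b": (0.9, 0.2), "c": (0.0, 1.0)}
groups = cluster(nodes)
```

Nodes "a" and "b" point in nearly the same direction and are grouped; "c" is nearly orthogonal and forms its own group. Trust measures would then be computed per group rather than over the whole network.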

10.
Econometric Reviews, 2013, 32(4), 385–424
This paper introduces nonlinear dynamic factor models for various applications related to risk analysis. Traditional factor models represent the dynamics of processes driven by movements of latent variables, called the factors. Our approach extends this setup by introducing factors defined as random dynamic parameters and stochastic autocorrelated simulators. This class of factor models can represent processes with time varying conditional mean, variance, skewness and excess kurtosis. Applications discussed in the paper include dynamic risk analysis, such as risk in price variations (models with stochastic mean and volatility), extreme risks (models with stochastic tails), risk on asset liquidity (stochastic volatility duration models), and moral hazard in insurance analysis.

We propose estimation procedures for models in which the marginal density of the series and the factor dynamics are parameterized by distinct subsets of parameters. Such a partitioning of the parameter vector, found in many applications, considerably simplifies statistical inference. We develop a two-stage maximum likelihood method, called the Finite Memory Maximum Likelihood, which is easy to implement in the presence of multiple factors. We also discuss simulation-based estimation, testing, prediction and filtering.
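One member of the model class discussed above, a stochastic-volatility factor model, is easy to simulate: a latent AR(1) log-volatility factor drives the conditional variance of the observed series. The parameters below are hypothetical, chosen only to illustrate the mechanism:

```python
import math
import random

# Toy stochastic-volatility simulation: the latent factor h_t follows an
# AR(1), and the observed return is N(0, exp(h_t)). This yields time-varying
# conditional variance of the kind the factor models above are built for.
def simulate_sv(n, phi=0.95, sigma_eta=0.2, seed=42):
    rng = random.Random(seed)
    h = 0.0
    returns, vols = [], []
    for _ in range(n):
        h = phi * h + sigma_eta * rng.gauss(0.0, 1.0)   # latent log-vol factor
        vol = math.exp(h / 2.0)                          # conditional std dev
        vols.append(vol)
        returns.append(vol * rng.gauss(0.0, 1.0))
    return returns, vols

returns, vols = simulate_sv(500)
```

Persistence (phi close to 1) produces the volatility clustering seen in price-variation data; richer variants of the class also let the factor drive conditional skewness and tail thickness.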

11.
An examination of the theoretical moments of a probability density function provides useful information about the flexibility of alternative distributions. Johnson and Kotz (1970), among others, consider skewness-kurtosis plots to illustrate the ability of various probability density functions to model diverse distributional characteristics. This article investigates the skewness-kurtosis plots of the exponential generalized beta of the first and second kind and some important special cases which have been used in various applications in economics, statistics, and finance.
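A skewness-kurtosis plot traces, for each member of a family, the point (skewness, excess kurtosis) as the shape parameters vary. The gamma family (used here only as a simple stand-in for the EGB families studied in the article) has closed forms, skewness 2/sqrt(k) and excess kurtosis 6/k for shape k, so its curve can be computed directly:

```python
# Trace the gamma family's curve in the (skewness, excess kurtosis) plane:
# skewness = 2 / sqrt(k), excess kurtosis = 6 / k for shape parameter k.
def gamma_skew_kurt(k):
    return 2.0 / k ** 0.5, 6.0 / k

curve = [gamma_skew_kurt(k) for k in (0.5, 1.0, 2.0, 8.0)]
# As k grows, the family moves towards (0, 0), i.e. towards normality.
```

Note that every gamma point satisfies excess kurtosis = 1.5 × skewness², so the whole family occupies a single curve in the plane; flexible families such as the EGB cover a two-dimensional region instead, which is exactly what the plots in the article display.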

12.
Econometric Reviews, 2012, 31(1), 1–26
Abstract

This paper proposes a nonparametric procedure for testing conditional quantile independence using projections. Relative to existing smoothed nonparametric tests, the resulting test statistic (i) detects high-frequency local alternatives that converge to the null hypothesis in probability at a faster rate and (ii) yields improvements in finite-sample power when a large number of variables are included under the alternative. In addition, it allows the researcher to include qualitative information and, if desired, to direct the test against specific subsets of alternatives without imposing any functional form on them. We use the weighted Nadaraya-Watson (WNW) estimator of the conditional quantile function, avoiding boundary problems in estimation and testing, and prove weak uniform consistency (with rate) of the WNW estimator for absolutely regular processes. The procedure is applied to a study of risk spillovers among banks. We show that the methodology generalizes some recently proposed measures of systemic risk, and we use the quantile framework to assess the intensity of risk spillovers among individual financial institutions.
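The object being estimated, a conditional quantile function, can be illustrated with a simplified local-constant kernel estimator. This is a stripped-down relative of the WNW estimator (the WNW version adjusts the weights to remove boundary bias; that refinement is omitted here): weight the sample by a Gaussian kernel in x, then take the weighted quantile of y.

```python
import math

# Simplified local-constant kernel conditional quantile estimator:
# weight observations by their kernel distance to x0, then return the
# smallest y whose cumulative weight reaches tau of the total.
def kernel_quantile(x0, xs, ys, tau=0.5, h=0.5):
    w = [math.exp(-0.5 * ((x - x0) / h) ** 2) for x in xs]
    pairs = sorted(zip(ys, w))            # sort by response value
    total = sum(w)
    acc = 0.0
    for y, wi in pairs:
        acc += wi
        if acc >= tau * total:
            return y
    return pairs[-1][0]

# Two hypothetical clusters: low responses near x = 0, high near x = 1.
xs = [0.0, 0.1, 0.2, 1.0, 1.1, 1.2]
ys = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
q = kernel_quantile(0.1, xs, ys, tau=0.5)   # conditional median near x = 0.1
```

The estimated conditional median tracks whichever cluster dominates the kernel weights, the same local behaviour the WNW estimator exhibits away from the boundary.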

13.
Children exposed to mixtures of endocrine disrupting compounds such as phthalates are at high risk of experiencing significant friction in their growth and sexual maturation. This article is primarily motivated by a study that aims to assess the toxicants-modified effects of risk factors related to the hazards of early or delayed onset of puberty among children living in Mexico City. To address the hypothesis of potential nonlinear modification of covariate effects, we propose a new Cox regression model with multiple functional covariate-environment interactions, which allows covariate effects to be altered nonlinearly by mixtures of exposed toxicants. This new class of models is rather flexible and includes many existing semiparametric Cox models as special cases. To achieve efficient estimation, we develop the global partial likelihood method of inference, in which we establish key large-sample results, including estimation consistency, asymptotic normality, semiparametric efficiency and the generalized likelihood ratio test for both parameters and nonparametric functions. The proposed methodology is examined via simulation studies and applied to the analysis of the motivating data, where maternal exposures to phthalates during the third trimester of pregnancy are found to be important risk modifiers for the age of attaining the first stage of puberty. The Canadian Journal of Statistics 47: 204–221; 2019 © 2019 Statistical Society of Canada
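The building block behind this whole model class is the Cox partial likelihood. The sketch below evaluates it for a one-covariate model on a tiny hypothetical dataset with no tied event times; the paper's functional covariate-environment interaction terms are omitted:

```python
import math

# Cox partial log-likelihood for a single covariate, no ties:
# for each observed event, the contribution is beta*x_i minus the log of the
# sum of exp(beta*x_j) over everyone still at risk at that event time.
def cox_partial_loglik(beta, times, events, x):
    data = sorted(zip(times, events, x))   # order subjects by time
    ll = 0.0
    for i, (t, d, xi) in enumerate(data):
        if d:   # observed event (d = 1); censored subjects add no term
            risk = sum(math.exp(beta * xj) for _, _, xj in data[i:])
            ll += beta * xi - math.log(risk)
    return ll

times = [2.0, 3.0, 5.0, 8.0]
events = [1, 1, 0, 1]      # 1 = event observed, 0 = censored
x = [1.0, 0.0, 1.0, 0.0]   # a single covariate
# At beta = 0 each event term is -log(risk-set size): -log(4*3*1).
ll0 = cox_partial_loglik(0.0, times, events, x)
```

Maximising this function over beta gives the standard Cox estimate; the article replaces the constant beta with covariate effects that vary nonlinearly with the toxicant mixture.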

14.
Abstract

One of the most important factors in building and changing communication mechanisms in social networks is accounting for the features of the members of the network. Most existing network-monitoring methods do not consider the effects of features on network formation mechanisms, and others do not yield reliable results when the features are numerous or correlated with one another. In this article, we combine two methods, principal component analysis (PCA) and the likelihood method, to monitor the underlying network model when the individuals' features are numerous and some of them are highly correlated with each other.
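The PCA step that handles the correlated features can be sketched in two dimensions, where the covariance eigenvalues have a closed form. The member-feature data below are hypothetical; with strongly correlated features, almost all variance loads on the first component, which is what makes the subsequent likelihood monitoring tractable:

```python
import math

# Minimal 2-D PCA via the closed-form eigenvalues of a 2x2 covariance matrix.
def pca_2d(points):
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 + root, tr / 2.0 - root   # (largest, smallest) variance

# Two nearly perfectly correlated features: one component carries the signal.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.1)]
lam1, lam2 = pca_2d(pts)
```

Replacing a block of correlated features by its leading principal components removes the redundancy that otherwise destabilises the likelihood-based monitoring statistic.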

15.
Summary: Responses to income questions in surveys are often rounded by the respondents. Though this is widely ignored, rounding can have detrimental effects on the results of a statistical analysis, especially with respect to the consistency of estimates. This paper deals with the analysis of data from the Finnish sub-sample of the European Community Household Panel (ECHP) with respect to factors that influence the rounding of personal gross wage and earnings. The finding is that the propensity to observe rounded values can be related to factors such as the interview mode, the wage level, and personal characteristics such as gender and job type. *Work financed by the European Commission under contract number IST-1999-11101.

16.
Since the web-based registry ClinicalTrials.gov was launched on 29 February 2000, the pharmaceutical industry has made available an increasing amount of information about the clinical trials that it sponsors. The process has been spurred on by a number of factors, including a wish by the industry to provide greater transparency regarding clinical trial data, and has been both aided and complicated by the number of institutions that have a legitimate interest in guiding and defining what should be made available. This article reviews the history of this process of making information about clinical trials publicly available. It provides a reader's guide to the study registries and the databases of results, and looks at some indicators of consistency in the posting of study information.

17.
AStA Advances in Statistical Analysis - Data-based methods and statistical models are given special attention in the study of sports injuries to gain an in-depth understanding of their risk factors and...

18.
Lele has shown that the Procrustes estimator of form is inconsistent and raised the question of the consistency of the Procrustes estimator of shape. In this paper the consistency of estimators of form and shape is studied under various assumptions. In particular, it is shown that the Procrustes estimator of shape is consistent under the assumption of an isotropic error distribution and that consistency breaks down if the assumption of isotropy is relaxed. The relevance of these results for practical shape analysis is discussed. As a by-product, some new results are derived for the offset uniform distribution from directional data.
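The Procrustes step itself, optimally rotating one configuration onto another after centring, has a closed form in two dimensions: the best rotation angle is atan2 of the summed cross products over the summed dot products of the centred landmarks. A sketch (this is the alignment step only, not the consistency analysis of the paper):

```python
import math

# Ordinary 2-D Procrustes rotation: after centring both configurations,
# the optimal angle rotating X onto Y is atan2(sum cross, sum dot).
def procrustes_angle(X, Y):
    n = len(X)
    cx = (sum(p[0] for p in X) / n, sum(p[1] for p in X) / n)
    cy = (sum(p[0] for p in Y) / n, sum(p[1] for p in Y) / n)
    num = den = 0.0
    for (x1, x2), (y1, y2) in zip(X, Y):
        a1, a2 = x1 - cx[0], x2 - cx[1]
        b1, b2 = y1 - cy[0], y2 - cy[1]
        num += a1 * b2 - a2 * b1   # cross-product term
        den += a1 * b1 + a2 * b2   # dot-product term
    return math.atan2(num, den)

# Hypothetical landmark configuration, and a copy rotated by 30 degrees.
X = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
th0 = math.pi / 6
Y = [(math.cos(th0) * x - math.sin(th0) * y,
      math.sin(th0) * x + math.cos(th0) * y) for x, y in X]
theta = procrustes_angle(X, Y)   # recovers the 30-degree rotation
```

With isotropic landmark noise added to Y, the recovered angle stays unbiased on average, which is the intuition behind the consistency result; anisotropic noise distorts the cross and dot products asymmetrically, which is where consistency breaks down.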

19.
ABSTRACT

In this article, we examine a novel way of imposing shape constraints on a local polynomial kernel estimator. The proposed approach is referred to as shape constrained kernel-weighted least squares (SCKLS). We prove uniform consistency of the SCKLS estimator with monotonicity and convexity/concavity constraints and establish its convergence rate. In addition, we propose a test to validate whether the shape constraints are correctly specified. The competitiveness of SCKLS is shown in a comprehensive simulation study. Finally, we analyze Chilean manufacturing data using the SCKLS estimator and quantify production in the plastics and wood industries. The results show that exporting firms have significantly higher productivity.
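SCKLS itself solves a constrained quadratic program, which is beyond a short sketch. A much simpler way to see what a monotonicity shape constraint does is to kernel-smooth the data and then project the fitted values onto the monotone cone with the pool-adjacent-violators algorithm (PAVA); this two-step shortcut is not the SCKLS estimator, only an illustration of constrained smoothing:

```python
import math

# Step 1: plain Nadaraya-Watson kernel smoother on a grid.
def kernel_smooth(xs, ys, grid, h=0.5):
    out = []
    for g in grid:
        w = [math.exp(-0.5 * ((x - g) / h) ** 2) for x in xs]
        out.append(sum(wi * yi for wi, yi in zip(w, ys)) / sum(w))
    return out

# Step 2: pool adjacent violators -- the closest non-decreasing sequence
# (equal weights): pool any adjacent pair that violates monotonicity.
def pava(y):
    vals, cnts = [], []
    for v in y:
        vals.append(v)
        cnts.append(1)
        while len(vals) > 1 and vals[-2] > vals[-1]:
            total = vals[-1] * cnts[-1] + vals[-2] * cnts[-2]
            cnt = cnts[-1] + cnts[-2]
            vals[-2:] = [total / cnt]
            cnts[-2:] = [cnt]
    out = []
    for v, c in zip(vals, cnts):
        out.extend([v] * c)
    return out

# Hypothetical noisy but increasing production data.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [0.0, 1.2, 0.9, 2.0, 2.5]
fit = pava(kernel_smooth(xs, ys, xs))   # monotone fitted curve
```

Unlike this post-hoc projection, SCKLS imposes the constraints inside the kernel-weighted least-squares problem itself, which is what its consistency and rate results are proved for.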

20.
Consider a population of individuals who are free of a disease under study and who are exposed simultaneously, at random exposure levels, say X, Y, Z, …, to several risk factors which are suspected to cause the disease in the population. At any specified levels X=x, Y=y, Z=z, …, the incidence rate of the disease in the population at risk is given by the exposure-response relationship r(x,y,z,…) = P(disease|x,y,z,…). The present paper examines the relationship between the joint distribution of the exposure variables X, Y, Z, … in the population at risk and the joint distribution of the exposure variables U, V, W, … among cases under the linear and the exponential risk models. It is proven that under the exponential risk model these two joint distributions belong to the same family of multivariate probability distributions, possibly with different parameter values. For example, if the exposure variables in the population at risk jointly have a multivariate normal distribution, so do the exposure variables among cases; if the former variables jointly have a multinomial distribution, so do the latter. More generally, it is demonstrated that if the joint distribution of the exposure variables in the population at risk belongs to the exponential family of multivariate probability distributions, so does the joint distribution of the exposure variables among cases. If the epidemiologist can specify the differences among the mean exposure levels in the case and control groups that are considered clinically or etiologically important in the study, the results of the present paper may be used to make sample-size determinations for the case-control study corresponding to specified protection levels, i.e., size α and power 1−β of a statistical test. The multivariate normal, multinomial, negative multinomial and Fisher's multivariate logarithmic series exposure distributions are used to illustrate these results.
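The one-dimensional normal case of this result is easy to check by Monte Carlo. If X ~ N(mu, sigma²) in the population at risk and the risk is proportional to exp(bx), the case exposure density is proportional to the population density times exp(bx), which is again normal, N(mu + b sigma², sigma²). Taking mu = 0, sigma = 1, b = 1, the case mean should be 1:

```python
import math
import random

# Monte-Carlo check of the exponential-risk result in one dimension:
# the exposure among cases is the at-risk density reweighted by exp(b*x),
# so its mean shifts from mu to mu + b*sigma^2 (= 1 here).
def case_mean(b=1.0, n=200_000, seed=0):
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)    # exposure in the population at risk
        w = math.exp(b * x)        # relative risk weight under r(x) prop. exp(bx)
        num += w * x
        den += w
    return num / den               # weighted mean = mean exposure among cases

m = case_mean()
```

The variance is unchanged in this case, only the mean shifts, which is the "same family, different parameters" statement of the abstract and the quantity that drives the sample-size calculations it mentions.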


Copyright©北京勤云科技发展有限公司  京ICP备09084417号