15 search results.
1.
Typically, parametric approaches to spatial problems require restrictive assumptions. On the other hand, in a wide variety of practical situations nonparametric bivariate smoothing techniques have been shown to be successfully employable for estimating small- or large-scale regularity factors, or even the signal content of spatial data taken as a whole. We propose a weighted local polynomial regression smoother suitable for fitting spatial data. To account for spatial variability, we both insert a spatial contiguity index into the standard formulation and construct a spatially adaptive bandwidth selection rule. Our bandwidth selector depends on Geary's local indicator of spatial association. As an illustrative example, we provide a brief Monte Carlo case study on equally spaced data, in which the performance of our smoother is compared with that of the standard polynomial regression procedure. This note, though the result of a close collaboration, was specifically elaborated as follows: paragraphs 1 and 2 by T. Sclocco and the remainder by M. Di Marzio. The authors are grateful to the referees for constructive comments and suggestions.
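The local polynomial idea underlying this abstract can be sketched as a weighted local linear fit. The sketch below is a generic smoother, not the authors' method: the extra per-observation weight vector `w` merely stands in for the spatial contiguity index, and the Gaussian kernel, fixed bandwidth, and toy data are illustrative assumptions (the paper's bandwidth is spatially adaptive via Geary's indicator).

```python
import numpy as np

def local_linear(x0, x, y, h, w=None):
    """Weighted local linear fit at the point x0.

    `w` is an optional vector of extra per-observation weights,
    a stand-in for the paper's spatial contiguity index (whose
    exact form is not specified here)."""
    if w is None:
        w = np.ones_like(y, dtype=float)
    # Gaussian kernel weights scaled by the bandwidth h
    k = np.exp(-0.5 * ((x - x0) / h) ** 2) * w
    X = np.column_stack([np.ones_like(x), x - x0])
    W = np.diag(k)
    # Weighted least squares; the intercept is the fitted value at x0
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta[0]

# Noisy sine on equally spaced data, as in the Monte Carlo example
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x) + rng.normal(scale=0.2, size=x.size)
fit = np.array([local_linear(xi, x, y, h=0.4) for xi in x])
```

A spatially adaptive variant would recompute `h` at each `x0` from a local association statistic rather than keeping it fixed.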
2.
This article proposes a systematic procedure for computing probabilities of operator action failure in the cognitive reliability and error analysis method (CREAM). The starting point for the quantification is a previously introduced fuzzy version of the CREAM paradigm that is here further extended to account for: (1) the ambiguity in the qualification of the conditions under which the action is performed (common performance conditions, CPCs) and (2) the fact that the effects of such conditions on human performance reliability may not all be equal.
3.
Kernel-based density estimation algorithms are inefficient in the presence of discontinuities at support endpoints. This is substantially due to the fact that classic kernel density estimators produce positive estimates beyond the endpoints. If a nonparametric estimate of a density functional is required in determining the bandwidth, then the problem also affects the bandwidth selection procedure. In this paper, algorithms for bandwidth selection and kernel density estimation are proposed for non-negative random variables. Furthermore, the methods we propose are compared with some of the principal solutions in the literature through a simulation study.
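The boundary problem the abstract describes, and one classic remedy from the literature, can be illustrated with the reflection method: mirror the sample about zero so that no probability mass leaks below the support endpoint. This is a generic textbook correction, not the authors' algorithm; the Gaussian kernel, bandwidth, and exponential test data are assumptions for illustration.

```python
import numpy as np

def kde_reflect(x_grid, data, h):
    """Gaussian KDE with boundary reflection at zero.

    Reflection is one classic literature fix for the boundary bias
    of standard kernel estimators on [0, inf); it is shown here as
    a generic illustration, not the paper's proposal."""
    data = np.asarray(data, dtype=float)
    # Augment the sample with its mirror image about the origin
    aug = np.concatenate([data, -data])
    u = (x_grid[:, None] - aug[None, :]) / h
    dens = np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
    # The estimate is defined on the non-negative half-line only
    return np.where(x_grid >= 0, dens, 0.0)

rng = np.random.default_rng(1)
sample = rng.exponential(scale=1.0, size=2000)
grid = np.linspace(0, 5, 200)
est = kde_reflect(grid, sample, h=0.25)
```

An unreflected estimator would place mass below zero and therefore underestimate the density near the endpoint, which is exactly the inefficiency the abstract points to.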
4.
Statistical learning is emerging as a promising field in which a number of algorithms from machine learning are interpreted as statistical methods, and vice versa. Owing to its good practical performance, boosting is one of the most studied machine learning techniques. We propose algorithms for multivariate density estimation and classification, generated by using traditional kernel techniques as weak learners in boosting algorithms. Our algorithms take the form of multistep estimators whose first step is a standard kernel method. Some strategies for bandwidth selection are also discussed, with regard both to the standard kernel density classification problem and to our 'boosted' kernel methods. Extensive experiments, using real and simulated data, show the encouraging practical relevance of the findings: standard kernel methods are often outperformed by the first boosting iterations across a range of bandwidth values. In addition, the practical effectiveness of our classification algorithm is confirmed by a comparative study on two real datasets, the competitors being tree-based methods, including AdaBoost with trees.
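The "multistep estimator whose first step is a standard kernel method" can be sketched in a multiplicative-boosting flavor: each step fits a weighted KDE, and observations where the current fit is low are upweighted before the next step. This is a loose generic sketch under stated assumptions (Gaussian kernel, a simple reciprocal reweighting rule, final renormalization on a grid), not the authors' exact algorithm.

```python
import numpy as np

def gauss_kde(points, data, h, w):
    """Weighted Gaussian KDE evaluated at `points`."""
    u = (np.asarray(points)[..., None] - data[None, :]) / h
    return (w * np.exp(-0.5 * u**2)).sum(axis=-1) / (h * np.sqrt(2 * np.pi))

def boosted_kde(grid, data, h, steps=2):
    """Multistep 'boosted' density estimate (generic sketch).

    Step 1 is a standard kernel estimate with uniform weights;
    later steps multiply in reweighted fits. The reweighting rule
    below is an illustrative assumption."""
    n = len(data)
    w = np.full(n, 1.0 / n)
    est = np.ones_like(grid, dtype=float)
    for _ in range(steps):
        est = est * gauss_kde(grid, data, h, w)
        fit = gauss_kde(data, data, h, w)
        # Upweight observations the current estimate underfits
        w = w / np.maximum(fit, 1e-12)
        w = w / w.sum()
    # Renormalise so the product integrates to one on the grid
    return est / np.trapz(est, grid)

rng = np.random.default_rng(2)
data = rng.normal(size=500)
grid = np.linspace(-4, 4, 201)
dens = boosted_kde(grid, data, h=0.4, steps=2)
```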
5.
We present an experiment investigating the effects of singling out an individual on trust and trustworthiness. We find that (a) trustworthiness falls if there is a singled-out subject; (b) non-singled-out subjects discriminate against the singled-out subject when they are not responsible for the distinct status of this person; (c) under a negative frame, the singled-out subject returns significantly less; (d) under a positive frame, the singled-out subject behaves bimodally, selecting either very low or very high return rates. Overall, singling out induces a negligible effect on trust but is potentially disruptive for trustworthiness.
6.
Kernel density classification and boosting: an L2 analysis
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is boosting, and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
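The baseline kernel density classifier that the abstract starts from assigns each point to the class whose prior-weighted density estimate is larger, i.e. it thresholds an estimate of the difference between the two densities. The sketch below shows that baseline (without the boosting step); the Gaussian kernel, bandwidths, and simulated two-class data are illustrative assumptions.

```python
import numpy as np

def gauss_kde(points, data, h):
    """Standard Gaussian KDE evaluated at `points`."""
    u = (np.asarray(points)[..., None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=-1) / (len(data) * h * np.sqrt(2 * np.pi))

def classify(points, data0, data1, h0, h1):
    """Two-class kernel density classifier: pick the class with the
    larger prior-weighted density estimate. h0 and h1 may differ;
    the paper's result is that each optimal bandwidth grows with
    the *other* group's sample size."""
    n0, n1 = len(data0), len(data1)
    pi0, pi1 = n0 / (n0 + n1), n1 / (n0 + n1)
    return (pi1 * gauss_kde(points, data1, h1) >
            pi0 * gauss_kde(points, data0, h0)).astype(int)

rng = np.random.default_rng(3)
data0 = rng.normal(-1.0, 1.0, size=300)
data1 = rng.normal(+1.0, 1.0, size=300)
test_x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(1, 1, 200)])
labels = np.concatenate([np.zeros(200, int), np.ones(200, int)])
pred = classify(test_x, data0, data1, h0=0.4, h1=0.4)
accuracy = (pred == labels).mean()
```

A boosted version would iterate this rule, reweighting the kernel contributions of misclassified points at each step.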
7.
ABSTRACT

The conditional density offers the most informative summary of the relationship between explanatory and response variables. We need to estimate it in place of the simple conditional mean when its shape is not well behaved. A motivation for estimating conditional densities that is specific to the circular setting lies in the fact that a natural alternative to it, quantile regression, could be considered problematic because circular quantiles are not rotationally equivariant. We treat conditional density estimation as a local polynomial fitting problem, as proposed by Fan et al. [Estimation of conditional densities and sensitivity measures in nonlinear dynamical systems. Biometrika. 1996;83:189–206] in the Euclidean setting, and discuss a class of estimators for the cases in which the conditioning variable is either circular or linear. Asymptotic properties of some members of the proposed class are derived. The effectiveness of the methods for finite sample sizes is illustrated by simulation experiments and an example using real data.
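The Fan et al. idea treats conditional density estimation as regression: smooth the kernel-weighted indicator of the response, K_hy(Y_i − y), against the covariate around x. The sketch below shows only the local-constant (degree-0) member of such a class in the Euclidean/linear case; the higher-order fits and the circular-covariate versions the abstract discusses would replace the weighted average with a local polynomial regression. Kernels, bandwidths, and data are illustrative assumptions.

```python
import numpy as np

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def cond_density(y, x, Y, X, hx, hy):
    """Local-constant conditional density estimate of f(y | x):
    a kernel-weighted average of K_hy(Y_i - y) around x. Higher
    order local polynomial fits would replace this average with
    a weighted regression."""
    wx = gauss((X - x) / hx)
    return (wx * gauss((Y - y) / hy) / hy).sum() / wx.sum()

rng = np.random.default_rng(4)
X = rng.uniform(0, 2 * np.pi, 2000)
Y = np.sin(X) + rng.normal(scale=0.3, size=X.size)
# Estimate f(y | x = pi/2); the truth is close to N(1, 0.3^2)
ygrid = np.linspace(-1, 3, 161)
fhat = np.array([cond_density(y, np.pi / 2, Y, X, hx=0.25, hy=0.15)
                 for y in ygrid])
```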
8.
ABSTRACT

Local likelihood has been mainly developed from an asymptotic point of view, with little attention to finite-sample issues. The present paper provides simulation evidence of how local likelihood density estimation performs in practice, from two points of view. First, we explore the impact of the normalization step on the final estimate; second, we show the effectiveness of higher-order fits in identifying modes present in the population when only small sample sizes are available. We refer to circular data; nevertheless, it is easily seen that our findings extend straightforwardly to the Euclidean setting, where they appear to be somewhat new.
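For the circular setting the abstract works in, the degree-0 local likelihood fit reduces to a von Mises kernel density estimate, and the normalization step mentioned above amounts to rescaling the estimate so it integrates to one over the circle. The sketch below shows only that simplest case (for the plain kernel estimate the rescaling is nearly a no-op; it matters for the higher-order fits the paper studies). Concentration parameters and the bimodal test sample are illustrative assumptions.

```python
import numpy as np

def vm_kde(grid, data, kappa):
    """Von Mises kernel density estimate on the circle.

    Kernel: exp(kappa * cos(t - x)) / (2 * pi * I0(kappa)).
    The final line is the normalization step: rescale so the
    estimate integrates to one over the circle."""
    diff = grid[:, None] - data[None, :]
    dens = np.exp(kappa * np.cos(diff)).mean(axis=1) / (2 * np.pi * np.i0(kappa))
    step = grid[1] - grid[0]
    return dens / (dens.sum() * step)

rng = np.random.default_rng(5)
# Bimodal circular sample: two von Mises components at 0 and pi
data = np.concatenate([rng.vonmises(0.0, 4.0, size=400),
                       rng.vonmises(np.pi, 4.0, size=400)])
grid = np.linspace(-np.pi, np.pi, 360, endpoint=False)
dens = vm_kde(grid, data, kappa=8.0)
```

Because the kernel depends on angles only through cosines, the estimate automatically respects the wrap-around at ±π.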
9.
This article analyzes the mechanisms and effects of innovative financial instruments that a central public administration (CPA) may adopt to minimize flood risk in particularly exposed regions. The pattern we suggest assumes that in risky areas the CPA can issue two financial instruments, called project options and CAT‐bonds, producing a dynamic interaction among three types of agents: the CPA itself, the local public administrations, and private investors. We explore the possible scenarios of such interaction and the conditions under which the CPA's goal of maximal risk reduction is attained. This pattern is proposed for flood risk mitigation in the city of Florence, where the model dynamics are tested using parameters obtained from engineering studies.
10.
The concentration of high-frequency controls in a limited period of time ("crackdowns") constitutes an important feature of many law-enforcement policies around the world. In this paper, we offer a comprehensive investigation of the relative efficiency and effectiveness of various crackdown policies using a lab-in-the-field experiment with real passengers of a public transport service. We introduce a novel game, the daily public transportation game, in which subjects have to decide, over many periods, whether or not to buy a ticket, knowing that there might be a control. Our results show that (a) concentrated crackdowns are less effective and efficient than random controls; (b) prolonged crackdowns reduce fare-dodging during the period of intense monitoring but induce a burst of fraud as soon as they are withdrawn; (c) pre-announced controls induce more fraud in the periods without control. Overall, we also observe that real fare-dodgers cheat more in the experiment than non-fare-dodgers.