  Subscription full text   33 articles
  Free access   0 articles
Management   9 articles
Demography   1 article
Theory and methodology   2 articles
Sociology   6 articles
Statistics   15 articles
  2021   1 article
  2020   1 article
  2018   1 article
  2017   1 article
  2015   2 articles
  2014   5 articles
  2013   4 articles
  2012   1 article
  2010   2 articles
  2009   5 articles
  2008   1 article
  2007   4 articles
  2006   1 article
  1999   1 article
  1998   1 article
  1983   1 article
  1971   1 article
Sort order: 33 results found in total, search time 15 ms
1.
Damage models for natural hazards are used for decision making on reducing and transferring risk. The damage estimates from these models depend on many variables and their complex, sometimes nonlinear, relationships with the damage. In recent years, data-driven modeling techniques have been used to capture those relationships. The data available to build such models are often limited, so in practice it is usually necessary to transfer models to a different context. In this article, we show that this implies the samples used to build the model are often not fully representative of the situation in which they are applied, which leads to a "sample selection bias." We enhance data-driven damage models by applying methods, not previously applied to damage modeling, that correct for this bias before the machine learning (ML) models are trained. We demonstrate this with case studies on flooding in Europe and on typhoon wind damage in the Philippines. Two sample selection bias correction methods from the ML literature are applied, and one of them is also adapted to our problem. These three methods are combined with stochastic generation of synthetic damage data. For both case studies, the sample selection bias correction techniques reduce model errors; for the mean bias error in particular, the reduction can exceed 30%. The novel combination with stochastic data generation appears to enhance these techniques further. This shows that sample selection bias correction methods are beneficial for damage model transfer.
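The abstract does not name the two correction methods it uses. As a purely illustrative sketch, the snippet below shows one widely used family of sample selection bias corrections from the ML literature, importance weighting with weights estimated by a domain classifier, applied before training a damage model; the data, feature dimensions, and model choices are assumptions made for this example only.

```python
# Illustrative sketch only: importance weighting via a domain classifier,
# one common sample-selection-bias correction in the ML literature.
# Data, features, and model choices below are assumptions for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# "Source" events with observed damage (the context the model is built in).
X_src = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
y_src = X_src[:, 0] ** 2 + 0.5 * X_src[:, 1] + rng.normal(scale=0.1, size=500)

# "Target" events without damage observations (the transfer context).
X_tgt = rng.normal(loc=0.7, scale=1.2, size=(500, 3))

# 1. Classifier separating source (label 0) from target (label 1) samples.
X_dom = np.vstack([X_src, X_tgt])
z_dom = np.r_[np.zeros(len(X_src)), np.ones(len(X_tgt))]
domain_clf = LogisticRegression(max_iter=1000).fit(X_dom, z_dom)

# 2. Importance weight for each source sample: p(target | x) / p(source | x).
p_tgt = domain_clf.predict_proba(X_src)[:, 1]
weights = p_tgt / (1.0 - p_tgt)
weights *= len(weights) / weights.sum()   # normalise to mean one

# 3. Train the damage model on the re-weighted source sample.
damage_model = RandomForestRegressor(n_estimators=200, random_state=0)
damage_model.fit(X_src, y_src, sample_weight=weights)
print(damage_model.predict(X_tgt[:5]))
```

In this sketch the weights up-weight source events that resemble the transfer context, which is the general idea behind correcting a training sample that is not representative of where the model is applied.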
2.
3.
Lack of information about technology and prices often hampers empirical assessment of the profit maximization hypothesis (viz. by measuring the degree of profit efficiency). The non-parametric Data Envelopment Analysis (DEA) methodology can deal with such incomplete information. We exploit the implicit but largely neglected profit interpretation of the DEA model, which builds on assumptions of monotone and convex production possibility sets. We show how its embedded assessment of necessary conditions for profit maximization can be strengthened given partial information in the form of monetary sub-cost and sub-revenue data (which are often easier to obtain than pure quantity data). Finally, we argue that a 'mix' efficiency analysis naturally complements such a profit efficiency analysis. An application to German farm types complements our methodological discussion. Using non-parametric statistical tests, we further demonstrate the potential of the non-parametric approach for deriving strong and robust statistical evidence while imposing minimal structure on the setting under study. In particular, we test for significant efficiency variation across regions.
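The paper's profit-oriented DEA model with partial monetary information is not reproduced here. As a minimal sketch of the non-parametric building block such analyses rest on, the code below solves the standard input-oriented, variable-returns-to-scale DEA efficiency linear program with SciPy; the input/output data and the function name are invented for illustration.

```python
# Minimal sketch of the basic input-oriented, variable-returns-to-scale DEA
# efficiency LP for one decision making unit (DMU). The paper's profit-oriented
# extension with partial price data is richer; the data here are illustrative.
import numpy as np
from scipy.optimize import linprog

# inputs X (m x n) and outputs Y (s x n) for n DMUs (columns = DMUs)
X = np.array([[2.0, 3.0, 6.0, 9.0],
              [5.0, 4.0, 7.0, 8.0]])
Y = np.array([[1.0, 2.0, 3.0, 4.0]])

def dea_vrs_efficiency(k, X, Y):
    """Input-oriented VRS efficiency score of DMU k (1.0 = efficient)."""
    m, n = X.shape
    s = Y.shape[0]
    # decision vector v = [theta, lambda_1, ..., lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    # sum_j lambda_j * x_ij <= theta * x_ik   (inputs)
    A_in = np.hstack([-X[:, [k]], X])
    # sum_j lambda_j * y_rj >= y_rk           (outputs)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, k]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # VRS: lambdas sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

print([round(dea_vrs_efficiency(k, X, Y), 3) for k in range(X.shape[1])])
```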
4.
5.
A supply chain may operate under a preorder mode, a consignment mode, or a combination of the two. Under preorder, the retailer procures before the sale and bears the full inventory risk during the sale; under consignment, the retailer sells the product on behalf of the supplier, who bears the inventory risk. The combination mode shares the risk within the supply chain. Existing research has examined these supply chain modes from various operational aspects, but the impact of financial constraints has been neglected. This paper examines that impact and investigates supply chain efficiency under each mode. Based on a Stackelberg game with the supplier as the leader, we show that without a financial constraint the supplier always prefers the consignment mode, taking the full inventory risk. In the presence of a financial constraint, however, the supplier sells part of the inventory to the retailer through preorder, which shares the inventory risk within the supply chain. We show that with a financial constraint, the combination mode is the most efficient mode even if the retailer earns zero internal capital.
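The abstract does not give the primitives of the Stackelberg model, so the snippet below is only a stylized newsvendor-style illustration of how the unsold-inventory risk shifts between a pure preorder and a pure consignment arrangement; the price, cost, revenue-share, and uniform-demand assumptions are mine, not the paper's.

```python
# Stylized illustration (not the paper's model) of who bears the unsold
# inventory under the two modes. All parameters and the uniform demand
# distribution are assumptions made for this example only.
import numpy as np

rng = np.random.default_rng(1)
p, w, c = 10.0, 6.0, 3.0          # retail price, wholesale price, production cost
q = 80.0                          # stocked quantity
demand = rng.uniform(0, 100, size=100_000)
sales = np.minimum(demand, q)

# Preorder: retailer pays w*q upfront and absorbs the cost of unsold units.
retailer_preorder = p * sales - w * q
supplier_preorder = (w - c) * q * np.ones_like(sales)

# Consignment: supplier is paid a revenue share only for units actually sold,
# so the supplier absorbs the cost of unsold units.
share = 0.6                       # assumed revenue share kept by the supplier
retailer_consign = (1 - share) * p * sales
supplier_consign = share * p * sales - c * q

for name, r, s in [("preorder", retailer_preorder, supplier_preorder),
                   ("consignment", retailer_consign, supplier_consign)]:
    print(f"{name:12s} retailer E[profit]={r.mean():7.1f}  "
          f"supplier E[profit]={s.mean():7.1f}  "
          f"supplier profit std={s.std():6.1f}")
```

Under these assumptions the supplier's profit is deterministic under preorder but fluctuates with demand under consignment, which is the risk-sharing contrast the abstract describes.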
6.
An Introduction to ‘Benefit of the Doubt’ Composite Indicators   (total citations: 2; self-citations: 0; citations by others: 2)
Despite their increasing use, composite indicators remain controversial. The undesirable dependence of countries’ rankings on the preliminary normalization stage, and the disagreement among experts and stakeholders on the specific weighting scheme used to aggregate sub-indicators, are often invoked to undermine the credibility of composite indicators. Data envelopment analysis may be instrumental in overcoming these limitations. Part of its appeal in the composite indicator context stems from its invariance to measurement units, which means the normalization stage can be skipped. Second, it fills the informational gap regarding the ‘right’ set of weights by generating flexible ‘benefit of the doubt’ weights for each evaluated country. Ease of interpretation is a third advantage of the specific model that is the main focus of this paper. In sum, the method may help to neutralize some recurring sources of criticism of composite indicators, allowing one to shift the focus to other, and perhaps more essential, stages of their construction. An abridged version of this paper was presented at the Workshop on European Indicators and Scoreboards, organised by DG Education and the Joint Research Centre under the auspices of CRELL, Brussels, October 24–25, 2005.
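As a concrete illustration of this class of models, the sketch below implements the canonical ‘benefit of the doubt’ linear program: for each country it searches for the weights most favourable to that country, subject to no country scoring above one under those same weights. The sub-indicator values are invented, and the paper's exact formulation may include additional weight restrictions.

```python
# Minimal sketch of the canonical 'benefit of the doubt' LP: each country gets
# the weights most favourable to itself, subject to every country's score
# staying <= 1 under those weights. Sub-indicator values are invented.
import numpy as np
from scipy.optimize import linprog

# rows = countries, columns = sub-indicators (oriented so that more is better)
Y = np.array([[0.8, 0.6, 0.9],
              [0.5, 0.9, 0.7],
              [0.9, 0.4, 0.6],
              [0.6, 0.7, 0.8]])

def bod_score(k, Y):
    """Benefit-of-the-doubt composite score of country k (1.0 = best practice)."""
    n, m = Y.shape
    c = -Y[k]                      # maximise country k's own weighted score
    A_ub = Y                       # every country's score stays <= 1
    b_ub = np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * m, method="highs")
    return -res.fun

print([round(bod_score(k, Y), 3) for k in range(Y.shape[0])])
```

Because the weights are chosen per country, no external normalization or agreed weighting scheme is needed, which is exactly the appeal described above.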
7.
We consider cross-sectional aggregation of time series with long-range dependence. This question arises, for instance, in the statistical analysis of networks, where aggregation is defined via routing matrices. Asymptotically, aggregation turns out to increase dependence substantially, transforming a hyperbolic decay of autocorrelations into a slowly varying rate. This effect has direct consequences for statistical inference: for instance, unusually slow rates of convergence for nonparametric trend estimators and nonstandard formulas for optimal bandwidths are obtained. The situation changes when time-dependent aggregation is applied. Suitably chosen time-dependent aggregation schemes can preserve a hyperbolic rate or even eliminate autocorrelations completely.
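To make the aggregation effect concrete, the sketch below simulates ARFIMA(0, d, 0) long-memory series via a truncated MA(∞) representation and compares sample autocorrelations of one component with those of a plain cross-sectional sum. This toy sum is not the paper's routing-matrix aggregation; the memory parameters, sample size, and truncation length are assumptions.

```python
# Illustrative sketch: ARFIMA(0, d, 0) series via truncated MA(infinity)
# weights psi_k = Gamma(k + d) / (Gamma(d) Gamma(k + 1)), then a plain
# cross-sectional sum as a toy stand-in for the paper's aggregation.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(42)

def arfima_0d0(n, d, trunc=2000):
    """Simulate n observations of ARFIMA(0, d, 0) with 0 < d < 0.5."""
    k = np.arange(trunc)
    psi = np.exp(gammaln(k + d) - gammaln(d) - gammaln(k + 1))
    eps = rng.standard_normal(n + trunc)
    return np.convolve(eps, psi)[trunc - 1: trunc - 1 + n]

def sample_acf(x, lags):
    x = x - x.mean()
    denom = np.sum(x * x)
    return np.array([np.sum(x[l:] * x[:-l]) / denom for l in lags])

n, lags = 4000, [1, 5, 10, 25, 50, 100]
components = [arfima_0d0(n, d) for d in (0.1, 0.2, 0.3, 0.45)]
aggregate = np.sum(components, axis=0)

print("lags      :", lags)
print("d = 0.10  :", np.round(sample_acf(components[0], lags), 3))
print("aggregate :", np.round(sample_acf(aggregate, lags), 3))
```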
8.
We consider parameter estimation for time-dependent, locally stationary long-memory processes. The asymptotic distribution of an estimator based on the local infinite autoregressive representation is derived, and asymptotic formulas for the mean squared error of the estimator and for the asymptotically optimal bandwidth are obtained. In spite of long memory, the optimal bandwidth turns out to be of the order n^{-1/5} and inversely proportional to the square of the second derivative of d. In this sense, local estimation of d is comparable to regression smoothing with iid residuals.
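The paper's exact mean squared error constants are not reproduced here; in particular, the stated dependence of the optimal bandwidth on d'' differs from the generic constant below. The following block only sketches the standard bias/variance tradeoff that yields an n^{-1/5} rate for a kernel-type local estimator with bandwidth b.

```latex
% Sketch of the standard bias/variance argument behind an n^{-1/5} bandwidth;
% the paper's exact constants (and their dependence on d'') are not reproduced.
\[
  \operatorname{Bias}\bigl(\hat d(t)\bigr) \approx C_1\, b^{2}\, d''(t), \qquad
  \operatorname{Var}\bigl(\hat d(t)\bigr) \approx \frac{C_2}{n b},
\]
\[
  \operatorname{MSE}(b) \approx C_1^{2} b^{4} d''(t)^{2} + \frac{C_2}{n b}
  \;\Longrightarrow\;
  b_{\mathrm{opt}} = \Bigl(\frac{C_2}{4\,C_1^{2}\, d''(t)^{2}}\Bigr)^{1/5} n^{-1/5},
  \qquad \operatorname{MSE}(b_{\mathrm{opt}}) = O\!\bigl(n^{-4/5}\bigr).
\]
```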
9.
We provide a nonparametric characterization of a general collective model for household consumption, which includes externalities and public consumption. Next, we establish testable necessary and sufficient conditions for data consistency with collective rationality that involve only observed price and quantity information. These conditions have a structure similar to that of the generalized axiom of revealed preference (GARP) for the unitary model, which is convenient from a testing point of view. In addition, we derive the minimum number of goods and observations needed to be able to reject collectively rational household behavior.
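For reference, the unitary benchmark to which these conditions are compared can be tested with a short GARP check: build the direct revealed preference relation from prices and quantities, take its transitive closure, and look for a cycle containing a strict preference. The implementation below is a generic sketch with invented data; it is not the paper's collective-rationality test.

```python
# Generic GARP check for the unitary model (the benchmark the abstract's
# conditions are compared to). Prices/quantities below are invented.
import numpy as np

def satisfies_garp(P, Q):
    """P, Q: (T x n) arrays of prices and chosen bundles for T observations."""
    cost_own = np.einsum('ti,ti->t', P, Q)      # p_t . q_t
    cost_cross = P @ Q.T                        # entry [t, s] = p_t . q_s
    # Direct revealed preference: q_t R0 q_s  iff  p_t.q_t >= p_t.q_s.
    R = cost_cross <= cost_own[:, None]
    # Transitive closure (Warshall) of R0.
    for k in range(len(Q)):
        R = R | (R[:, [k]] & R[[k], :])
    # Strict direct revealed preference: q_t P0 q_s  iff  p_t.q_t > p_t.q_s.
    strict = cost_cross < cost_own[:, None]
    # GARP: q_t R q_s must never be combined with q_s P0 q_t.
    return not np.any(R & strict.T)

P = np.array([[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]])
Q = np.array([[3.0, 1.0], [1.0, 3.0], [2.0, 2.0]])
print(satisfies_garp(P, Q))   # True: these choices are consistent with GARP
```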
10.
We propose a method to set-identify bounds on the sharing rule for a general collective household consumption model. Unlike the effects of distribution factors, the level of the sharing rule cannot be uniquely identified without strong assumptions on preferences across households. Our new results show that, although the sharing rule is not point identified without these assumptions, strong bounds on it can be obtained. We derive these bounds by applying the revealed preference restrictions implied by the collective model to the household's continuous aggregate demand functions. We obtain informative bounds even if nothing is known about whether each good is public, private, or assignable within the household, though having such information tightens the bounds. We apply our method to US PSID data, obtaining narrow bounds that yield useful conclusions regarding the effects of income and wages on intrahousehold resource sharing, and on the prevalence of individual (as opposed to household-level) poverty.