21 results.
1.
Urban Ecosystems - Urban agriculture (UA) is regarded as an emerging tool and strategy for sustainable urban development as it addresses a wide array of environmental, economic and social...
2.
Summary. The method of Bayesian model selection for join point regression models is developed. Given a set of K + 1 join point models M0, M1, …, MK with 0, 1, …, K join points respectively, the posterior distributions of the parameters and competing models Mk are computed by Markov chain Monte Carlo simulation. The Bayes information criterion BIC is used to select the model Mk with the smallest value of BIC as the best model. Another approach, based on the Bayes factor, selects the model Mk with the largest posterior probability as the best model when the prior distribution over the Mk is discrete uniform. Both methods are applied to analyse observed US cancer incidence rates for selected cancer sites. The join point models fitted to the data by the proposed methods are compared graphically with the method of Kim and co-workers, which is based on a series of permutation tests. The analyses show that the Bayes factor is sensitive to the prior specification of the variance σ², and that the model selected by BIC fits the data as well as the model selected by the permutation test, with the advantage of producing the posterior distribution of the join points. The Bayesian join point model and model selection method presented here will be integrated into the National Cancer Institute's join point software (http://www.srab.cancer.gov/joinpoint/) and will be available to the public.
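The BIC-based selection described above can be sketched as follows. This is a simplified illustration (ordinary least squares over a fixed set of candidate join points, with synthetic data), not the paper's MCMC procedure:

```python
import numpy as np

def fit_joinpoint(x, y, knots):
    # Design matrix for a piecewise-linear model: intercept, slope,
    # and one hinge term max(0, x - k) per join point k.
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(0.0, x - k) for k in knots])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid), X.shape[1]

def bic_select(x, y, candidate_models):
    # candidate_models: list of knot tuples, e.g. [(), (5,), (3, 7)];
    # BIC = n log(RSS/n) + p log(n); smallest BIC wins.
    n = len(y)
    best = None
    for knots in candidate_models:
        rss, p = fit_joinpoint(x, y, knots)
        bic = n * np.log(rss / n) + p * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, knots)
    return best

# Synthetic data with a single slope change at x = 6.
x = np.linspace(0, 10, 60)
rng = np.random.default_rng(1)
y = 1.0 + 0.5 * x + 2.0 * np.maximum(0.0, x - 6) + rng.normal(0, 0.3, x.size)
bic, knots = bic_select(x, y, [(), (2,), (6,), (2, 8)])
```

With the break placed at 6, the candidate model with a single knot at 6 attains the smallest BIC.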
3.
Effective production scheduling requires consideration of the dynamics and unpredictability of the manufacturing environment. An automated learning scheme, utilizing genetic search, is proposed for adaptive control in typical decentralized factory-floor decision making. A high-level knowledge representation for modeling production environments is developed, with facilities for genetic learning within this scheme. A multiagent framework is used, with individual agents being responsible for the dispatch decision making at different workstations. Learning is with respect to stated objectives, and given the diversity of scheduling goals, the efficacy of the designed learning scheme is judged through its response under different objectives. The behavior of the genetic learning scheme is analyzed and simulation studies help compare how learning under different objectives impacts certain aggregate measures of system performance.
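As a toy illustration of genetic search over dispatch-rule parameters (the weight encoding and the surrogate objective below are hypothetical stand-ins, not the paper's knowledge representation):

```python
import random

random.seed(0)

# Hypothetical surrogate objective: quality of a 3-weight dispatch rule,
# with the optimum placed at (0.5, 0.3, 0.2) purely for illustration.
TARGET = (0.5, 0.3, 0.2)

def fitness(w):
    return -sum((a - b) ** 2 for a, b in zip(w, TARGET))

def crossover(p1, p2):
    # Uniform crossover: each gene is taken from either parent.
    return tuple(random.choice(pair) for pair in zip(p1, p2))

def mutate(w, scale=0.1):
    # Perturb each gene, clamped to [0, 1].
    return tuple(min(1.0, max(0.0, g + random.uniform(-scale, scale))) for g in w)

pop = [tuple(random.random() for _ in range(3)) for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                                  # selection (elitist)
    offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                 for _ in range(20)]
    pop = elite + offspring
best = max(pop, key=fitness)
```

Elitist selection guarantees the best fitness never decreases across generations, so the final weights land close to the surrogate optimum.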
4.
Abstract

A key question for understanding the cross-section of expected equity returns is the following: which factors, from a given collection of factors, are risk factors; equivalently, which factors are in the stochastic discount factor (SDF)? Though the SDF is unobserved, assumptions about which factors (from the available set) are in the SDF restrict the joint distribution of the factors in specific ways, as a consequence of the economic theory of asset pricing. A different starting collection of factors in the SDF leads to a different set of restrictions on the joint distribution of factors. The conditional distribution of equity returns has the same restricted form regardless of what is assumed about the factors in the SDF, as long as the factors are traded; hence the distribution of asset returns is irrelevant for isolating the risk factors. The restricted factor models are distinct (nonnested) and do not arise by omitting or including a variable from a full model, which precludes analysis by standard statistical variable selection methods such as those based on the lasso and its variants. Instead, we develop what we call a Bayesian model scan strategy in which each factor is allowed to enter or not enter the SDF, and the resulting restricted models (of which there are 114,674 in our empirical study) are simultaneously confronted with the data. We use a Student-t distribution for the factors, model-specific independent Student-t distributions for the location parameters, a training sample to fix prior locations, and a creative way to arrive at the joint distribution of several other model-specific parameters from a single prior distribution. This makes our method essentially a scalable, tuned black-box method that can be applied across our large model space with little to no user intervention.
The model marginal likelihoods, and the implied posterior model probabilities, are compared with the prior probability of 1/114,674 for each model to find the best-supported model, and thus the factors most likely to be in the SDF. We provide detailed simulation evidence of the high finite-sample accuracy of the method. Our empirical study with 13 leading factors reveals that the highest marginal likelihood model is a Student-t distributed factor model with 5 degrees of freedom and 8 risk factors.
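Under a uniform model prior, comparing marginal likelihoods amounts to normalizing them into posterior model probabilities. A minimal numerical sketch, with made-up log marginal likelihood values:

```python
import numpy as np

# Hypothetical log marginal likelihoods for three candidate models.
log_ml = np.array([-1204.3, -1198.7, -1201.1])

def posterior_model_probs(log_ml):
    # With equal prior probability for each model, posterior model
    # probabilities are proportional to the marginal likelihoods.
    # Subtract the max before exponentiating for numerical stability.
    w = np.exp(log_ml - log_ml.max())
    return w / w.sum()

probs = posterior_model_probs(log_ml)
best = int(np.argmax(probs))  # index of the best-supported model
```

The max-subtraction trick matters in practice: raw log marginal likelihoods of this magnitude would underflow `exp` entirely.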
5.
Inventions – concepts, devices, procedures – are often created by networks of interacting agents in which the agents can be individuals (as in a scientific discipline) or can themselves be collectives (as in firms interacting in a market). Different collectives create and invent at different rates. It is plausible that the rate of invention is jointly determined by properties of the agents (e.g., their cognitive capacity) and by properties of the network of interactions (e.g., the density of the communication links), but little is known about such two-level interactions. We present an agent-based model of social creativity in which the individual agent captures key features of the human cognitive architecture derived from cognitive psychology, and the interactions are modeled by agents exchanging partial results of their symbolic processing of task information. We investigated the effect of agent and network properties on rates of invention and diffusion in the network via systematic parameter variations. Simulation runs show, among other results, that (a) the simulation exhibits network effects, i.e., the model captures the beneficial effect of collaboration; (b) the density of connections produces diminishing returns in terms of its benefit to the invention rate; and (c) limits on the cognitive capacity of the individual agents have the counterintuitive consequence of focusing their efforts. Limitations and relations to other computer simulation models of creative collectives are discussed.
6.
Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
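The benefit of blocking can be seen in a toy example: single-site Gibbs updates of two highly correlated Gaussian parameters mix slowly, while a joint (blocked) draw yields essentially independent samples. This is a generic illustration, not the paper's linear-mixed-model algorithm:

```python
import numpy as np

rho = 0.95
n = 5000
rng = np.random.default_rng(0)

# Single-site Gibbs for a bivariate normal with correlation rho:
# each full conditional is N(rho * other, 1 - rho^2).
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = rng.normal(rho * y[t - 1], np.sqrt(1 - rho**2))
    y[t] = rng.normal(rho * x[t], np.sqrt(1 - rho**2))

# Blocked sampler: draw (x, y) jointly, giving independent samples.
cov = np.array([[1.0, rho], [rho, 1.0]])
joint = rng.multivariate_normal([0.0, 0.0], cov, size=n)

def lag1_acf(z):
    # Empirical lag-1 autocorrelation.
    z = z - z.mean()
    return float(np.dot(z[:-1], z[1:]) / np.dot(z, z))

acf_single = lag1_acf(x)            # near rho**2, about 0.90
acf_block = lag1_acf(joint[:, 0])   # near 0
```

For this bivariate target the single-site x-chain is AR(1) with coefficient rho², so the gap between the two autocorrelations grows sharply as rho approaches 1.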
7.
Nonparametric and parametric estimators are combined to minimize the mean squared error among their linear combinations. The combined estimator is consistent and for large sample sizes has a smaller mean squared error than the nonparametric estimator when the parametric assumption is violated. If the parametric assumption holds, the combined estimator has a smaller MSE than the parametric estimator. Our simulation examples focus on mean estimation when data may follow a lognormal distribution, or can be a mixture with an exponential or a uniform distribution. Motivating examples illustrate possible application areas.
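A minimal sketch of the idea for lognormal mean estimation, with the combination weight chosen by a bootstrap MSE criterion; this particular weighting rule is a stand-in for illustration, not necessarily the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Nonparametric estimator of the mean: the sample average.
nonpar = data.mean()

# Parametric estimator under the lognormal assumption:
# E[X] = exp(mu + sigma^2 / 2), with (mu, sigma^2) the MLEs from log-data.
logd = np.log(data)
par = np.exp(logd.mean() + logd.var() / 2)

# Choose the weight in lam*par + (1-lam)*nonpar by minimizing a
# bootstrap estimate of MSE around the nonparametric point estimate.
B = 500
boot_par = np.empty(B)
boot_nonpar = np.empty(B)
for b in range(B):
    s = rng.choice(data, size=data.size, replace=True)
    boot_nonpar[b] = s.mean()
    ls = np.log(s)
    boot_par[b] = np.exp(ls.mean() + ls.var() / 2)

grid = np.linspace(0.0, 1.0, 101)
mse = [np.mean((lam * boot_par + (1 - lam) * boot_nonpar - nonpar) ** 2)
       for lam in grid]
lam_star = grid[int(np.argmin(mse))]
combined = lam_star * par + (1 - lam_star) * nonpar
```

Because the weight lies in [0, 1], the combined estimate always falls between the parametric and nonparametric point estimates.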
8.
9.
This paper provides a comparative study of machine learning techniques for two-group discrimination. Simulated data is used to examine how the different learning techniques perform with respect to certain data distribution characteristics. Both linear and nonlinear discrimination methods are considered. The data has been previously used in the comparative evaluation of a number of techniques and helps relate our findings across a range of discrimination techniques.
10.