Search results: 51 matches in total; items 21–30 are shown below.
21.
Two strategies that can potentially improve Markov Chain Monte Carlo algorithms are to use derivative evaluations of the target density, and to suppress random walk behaviour in the chain. The use of one or both of these strategies has been investigated in a few specific applications, but neither is used routinely. We undertake a broader evaluation of these techniques, with a view to assessing their utility for routine use. In addition to comparing different algorithms, we also compare two different ways in which the algorithms can be applied to a multivariate target distribution. Specifically, the univariate version of an algorithm can be applied repeatedly to one-dimensional conditional distributions, or the multivariate version can be applied directly to the target distribution.
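The two strategies described here underlie gradient-based samplers such as the Metropolis-adjusted Langevin algorithm. The sketch below is a minimal illustration of that idea for the multivariate case, not the specific algorithms or targets evaluated in the paper; the bivariate normal target, step size, and iteration count are assumptions made purely for illustration.

```python
import numpy as np

# Illustrative target: a correlated bivariate normal (not from the paper).
Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
Prec = np.linalg.inv(Sigma)

def log_target(x):
    return -0.5 * x @ Prec @ x

def grad_log_target(x):
    return -Prec @ x

def mala(n_iter=5000, eps=0.3, rng=np.random.default_rng(0)):
    """Metropolis-adjusted Langevin: proposals drift along the gradient of the
    log target, which suppresses random-walk behaviour relative to plain
    random-walk Metropolis."""
    x = np.zeros(2)
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        mean_fwd = x + 0.5 * eps**2 * grad_log_target(x)
        prop = mean_fwd + eps * rng.standard_normal(2)
        mean_bwd = prop + 0.5 * eps**2 * grad_log_target(prop)
        log_q_fwd = -np.sum((prop - mean_fwd)**2) / (2 * eps**2)
        log_q_bwd = -np.sum((x - mean_bwd)**2) / (2 * eps**2)
        log_alpha = log_target(prop) - log_target(x) + log_q_bwd - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples[t] = x
    return samples

samples = mala()
print("posterior mean estimate:", samples.mean(axis=0))
```

The univariate counterpart mentioned in the abstract would apply a one-dimensional update of the same form to each full conditional distribution in turn, rather than updating the whole vector at once.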
22.
Realistic statistical modelling of observational data often suggests a statistical model which is not fully identified, owing to potential biases that are not under the control of study investigators. Bayesian inference can be implemented with such a model, ideally with the most precise prior knowledge that can be ascertained. However, as a consequence of the non-identifiability, inference cannot be made arbitrarily accurate by choosing the sample size to be sufficiently large. In turn, this has consequences for sample size determination. The paper presents a sample size criterion that is based on a quantification of how much Bayesian learning can arise in a given non-identified model. A global perspective is adopted, whereby choosing larger sample sizes for some studies necessarily implies that some other potentially worthwhile studies cannot be undertaken. This suggests that smaller sample sizes should be selected with non-identified models, as larger sample sizes constitute a squandering of resources in making estimator variances very small compared with their biases. In particular, consider two investigators planning the same study, one of whom admits to the potential biases at hand and consequently uses a non-identified model, whereas the other pretends that there are no biases, leading to an identified but less realistic model. It is seen that the former investigator always selects a smaller sample size than the latter, with the difference being quite marked in some illustrative cases.
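To see concretely why Bayesian learning saturates in a non-identified model, consider a toy example of my own construction (not the paper's criterion): the data inform only the sum of two parameters, so the posterior standard deviation of either parameter stops shrinking once the sum is pinned down, no matter how large the sample is.

```python
import numpy as np

# Toy non-identified model (illustrative only): y_i ~ N(theta1 + theta2, 1),
# with independent N(0, 1) priors on theta1 and theta2. Only the sum is
# identified, so the posterior sd of theta1 plateaus near sqrt(0.5) ~ 0.71.
prior_prec = np.eye(2)               # prior precision of (theta1, theta2)
ones = np.ones((2, 1))

for n in [10, 100, 10_000, 1_000_000]:
    post_prec = prior_prec + n * (ones @ ones.T)   # data inform only theta1 + theta2
    post_cov = np.linalg.inv(post_prec)
    print(f"n = {n:>9}: posterior sd of theta1 = {np.sqrt(post_cov[0, 0]):.3f}")
```

Once the identified combination is essentially known, additional observations buy almost nothing, which is the sense in which very large samples squander resources under such models.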
23.
In this article, I suggest and support a utilitarian approach to business ethics. Utilitarianism is already widely used as a business ethics approach, although it is not well developed in the literature. Utilitarianism provides a guiding framework of decision making rooted in social benefit which helps direct business toward more ethical behavior. It is the basis for much of our discussion regarding the failures of Enron, WorldCom, and even the subprime mess and Wall Street meltdown. In short, the negative social consequences are constantly referred to as proof of the wrongness of these actions and events, and the positive social consequences of bailouts and other plans are used as ethical support for those plans to right the wrongs. I believe the main cause of the neglect of the utilitarian approach is misguided criticism. Here, I defend utilitarianism as a basis for business ethics against many criticisms found in the business ethics literature, showing that a business ethics approach relying on John Stuart Mill's utilitarianism supports principles like justice, is not biased against the minority, and is more reasonable than other views, such as a Kantian view, when dealing with workers and making other decisions in business. I also explain utilitarian moral motivation and use satisficing theory to defend utilitarian business ethics against questions raised regarding the utilitarian calculus.
24.
J. Gong & R. P. McAfee, Economic Inquiry, 2000, 38(2): 218–238
We model the civil dispute resolution process as a two-stage game, with the parties bargaining to reach a settlement in the first stage and then playing a litigation expenditure game at trial in the second stage. We find that the English rule shifts the settlement away from the interim fair and unbiased settlement in most circumstances. Overall welfare changes favor the party who makes the offer in the pretrial negotiation stage. Lawyers, however, always benefit from the English rule, because fee shifting increases the stake of the trial and thus intensifies the use of legal services.
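As a back-of-the-envelope illustration of the fee-shifting point (illustrative numbers of my own, not the authors' equilibrium model), compare what a plaintiff stands to gain or lose under the two rules: the English rule adds both parties' fees to the amount riding on the verdict.

```python
# Illustrative numbers only (not from the paper): judgment value and legal fees.
V, fee_p, fee_d = 100.0, 20.0, 25.0   # judgment, plaintiff's fee, defendant's fee

def plaintiff_payoffs(rule):
    """Plaintiff's payoff if she wins vs. loses, under each fee rule."""
    if rule == "american":            # each side bears its own fee
        return V - fee_p, -fee_p
    if rule == "english":             # loser pays both sides' fees
        return V, -(fee_p + fee_d)
    raise ValueError(rule)

for rule in ("american", "english"):
    win, lose = plaintiff_payoffs(rule)
    print(f"{rule:>8} rule: win={win:6.1f}  lose={lose:6.1f}  stake={win - lose:6.1f}")
# The stake (win minus lose) rises from V to V + fee_p + fee_d under the
# English rule, which is why fee shifting intensifies litigation expenditure.
```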
25.
The authors examine several aspects of cross‐validation for Bayesian models. In particular, they propose a computational scheme which does not require a separate posterior sample for each training sample.
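A common way to avoid refitting the model for each training sample is to reuse a single full-data posterior sample with importance weights; whether or not this matches the authors' scheme, the sketch below shows the generic importance-sampling leave-one-out estimate for an assumed conjugate normal model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Illustrative data and model (not from the paper): y_i ~ N(mu, 1), mu ~ N(0, 10^2).
y = rng.normal(1.0, 1.0, size=50)
n = y.size

# Single posterior sample for mu from the full data (conjugate normal update).
post_var = 1.0 / (n / 1.0 + 1.0 / 100.0)
post_mean = post_var * y.sum()
mu_draws = rng.normal(post_mean, np.sqrt(post_var), size=4000)

# Importance-sampling leave-one-out: each held-out point is "removed" by
# reweighting the full-data draws instead of refitting the model.
loo = np.empty(n)
for i in range(n):
    lik_i = stats.norm.pdf(y[i], loc=mu_draws, scale=1.0)
    loo[i] = 1.0 / np.mean(1.0 / lik_i)   # harmonic-mean IS estimate of p(y_i | y_-i)
print("IS-LOO estimate of the sum of log predictive densities:", np.log(loo).sum())
```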
26.
27.
Influence functions are considered as diagnostics for model departures in parametric Bayesian inference. A baseline model density is expressed as a mixture; then the mixing distribution is perturbed. This is designed to engender perturbations which are plausible a priori. The influence of perturbations is measured for both Bayes estimates and their associated posterior expected losses. To assess the plausibility of perturbations a posteriori, an additional influence function is constructed for the Bayes factor comparing the perturbed and baseline models. The effect of perturbation on various estimands is incorporated in the analysis.
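The following toy computation (my own construction, intended only to make the mixture-perturbation idea concrete, not the paper's diagnostics) writes the prior for a normal mean as a contaminated mixture and measures how the Bayes estimate responds to a small perturbation of the mixing weight.

```python
import numpy as np
from scipy import stats

y_bar, n, sigma2 = 1.2, 20, 1.0                   # illustrative data summary (assumed)
comp_means, comp_vars = [0.0, 3.0], [1.0, 1.0]    # baseline and perturbing prior components

def posterior_mean(eps):
    """Posterior mean of the normal mean when the prior is the mixture
    (1 - eps) * N(0, 1) + eps * N(3, 1); eps = 0 is the baseline model."""
    weights = np.array([1.0 - eps, eps])
    post_means, marg = [], []
    for m0, v0 in zip(comp_means, comp_vars):
        v_post = 1.0 / (n / sigma2 + 1.0 / v0)
        post_means.append(v_post * (n * y_bar / sigma2 + m0 / v0))
        marg.append(stats.norm.pdf(y_bar, loc=m0, scale=np.sqrt(v0 + sigma2 / n)))
    w_post = weights * np.array(marg)
    w_post /= w_post.sum()
    return float(w_post @ np.array(post_means))

eps = 1e-4
influence = (posterior_mean(eps) - posterior_mean(0.0)) / eps   # numerical derivative at eps = 0
print("baseline posterior mean:", round(posterior_mean(0.0), 4))
print("local influence of the contaminating component:", round(influence, 4))
```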
28.
Who will lead?     
A recent survey conducted by the UCLA Center for Health Services Management and the Physician Executive Practice of Heidrick & Struggles, an executive search firm, sheds light on the emerging physician executive's role. The goal of the research was to identify success factors as a means of evaluating and developing effective industry leaders. Respondents were asked to assess specific skills in relation to nine categories, including communication, leadership, interpersonal skills, self-motivation/management, organizational knowledge, organizational strategy, administrative skills, and thinking. Communication, leadership, and self-motivation/management emerged, in that order, as the three most important success factors for physician executives. An individual's general competencies, work styles, and ability to lead others through organizational restructuring define his or her suitability for managerial positions in the health care industry.
29.
Model misspecification and noisy covariate measurements are two common sources of inference bias. There is considerable literature on the consequences of each problem in isolation. In this paper, however, the author investigates their combined effects. He shows that in the context of linear models, the large‐sample error in estimating the regression function may be partitioned in two terms quantifying the impact of these sources of bias. This decomposition reveals trade‐offs between the two biases in question in a number of scenarios. After presenting a finite‐sample version of the decomposition, the author studies the relative impacts of model misspecification, covariate imprecision, and sampling variability, with reference to the detectability of the model misspecification via diagnostic plots.
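A small simulation can make the interplay concrete; the data-generating model, error variance, and sample size below are assumptions of mine, not the author's setup. A straight line is fitted to a cubic regression function, first with the true covariate (misspecification alone) and then with a noisily measured covariate (both bias sources); in this particular configuration the two biases partially offset, illustrating the kind of trade-off the abstract mentions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000                                # large n, so sampling variability is negligible

x = rng.normal(0.0, 1.0, n)                # true covariate
y = 1.0 + x + 0.2 * x**3 + rng.normal(0.0, 1.0, n)   # true regression is cubic, not linear
w = x + rng.normal(0.0, 0.7, n)            # noisy measurement of x (classical error)

def ols_slope(cov, resp):
    """Slope from a simple linear regression of resp on cov."""
    X = np.column_stack([np.ones_like(cov), cov])
    return np.linalg.lstsq(X, resp, rcond=None)[0][1]

# The coefficient on x in the data-generating model is 1.0.
print("linear fit on true x (misspecification only):", round(ols_slope(x, y), 3))
print("linear fit on noisy w (both biases)         :", round(ols_slope(w, y), 3))
# Misspecification inflates the slope; measurement error attenuates it,
# so in this setup the two biases partially cancel each other.
```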
30.
In response to growing concern about the reliability and reproducibility of published science, researchers have proposed adopting measures of “greater statistical stringency,” including suggestions to require larger sample sizes and to lower the highly criticized “p < 0.05” significance threshold. While pros and cons are vigorously debated, there has been little to no modeling of how adopting these measures might affect what type of science is published. In this article, we develop a novel optimality model that, given current incentives to publish, predicts a researcher’s most rational use of resources in terms of the number of studies to undertake, the statistical power to devote to each study, and the desirable prestudy odds to pursue. We then develop a methodology that allows one to estimate the reliability of published research by considering a distribution of preferred research strategies. Using this approach, we investigate the merits of adopting measures of “greater statistical stringency” with the goal of informing the ongoing debate.
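For a rough sense of what is at stake, the positive-predictive-value calculation below (a textbook formula, not necessarily the authors' optimality model) shows how the reliability of “significant” findings varies with the significance threshold, statistical power, and prestudy odds.

```python
import itertools

def ppv(alpha, power, prestudy_odds):
    """Share of statistically significant findings that reflect true effects,
    given the significance threshold alpha, the power, and the prestudy odds
    R = P(true hypothesis) / P(false hypothesis)."""
    return (power * prestudy_odds) / (power * prestudy_odds + alpha)

for alpha, power, odds in itertools.product([0.05, 0.005], [0.35, 0.8], [0.1, 1.0]):
    print(f"alpha={alpha:<6} power={power:<5} prestudy odds={odds:<4} "
          f"-> PPV = {ppv(alpha, power, odds):.2f}")
```

As the printout shows, lowering the threshold from 0.05 to 0.005 or raising power sharply improves the share of true findings when prestudy odds are low, which is the intuition behind calls for greater statistical stringency.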