Full text (subscription): 18 articles; free access: 1 article
By subject: Theory and Methodology (1), Sociology (3), Statistics (15)
By year: 2023 (1), 2018 (1), 2014 (1), 2013 (1), 2011 (1), 2009 (1), 2008 (1), 2006 (1), 2005 (3), 2004 (2), 2003 (2), 2001 (1), 2000 (1), 1999 (1), 1997 (1)
19 results in total (search time: 15 ms)
1.
Quantifying uncertainty in the biospheric carbon flux for England and Wales (total citations: 1; self-citations: 0; citations by others: 1)
Summary.  A crucial issue in the current global warming debate is the effect of vegetation and soils on carbon dioxide (CO2) concentrations in the atmosphere. Vegetation can extract CO2 through photosynthesis, but respiration, decay of soil organic matter and disturbance effects such as fire return it to the atmosphere. The balance of these processes is the net carbon flux. To estimate the biospheric carbon flux for England and Wales, we address the statistical problem of inference for the sum of multiple outputs from a complex deterministic computer code whose input parameters are uncertain. The code is a process model which simulates the carbon dynamics of vegetation and soils, including the amount of carbon that is stored as a result of photosynthesis and the amount that is returned to the atmosphere through respiration. The aggregation of outputs corresponding to multiple sites and types of vegetation in a region gives an estimate of the total carbon flux for that region over a period of time. Expert prior opinions are elicited for marginal uncertainty about the relevant input parameters and for correlations of inputs between sites. A Gaussian process model is used to build emulators of the multiple code outputs and Bayesian uncertainty analysis is then used to propagate uncertainty in the input parameters through to uncertainty on the aggregated output. Numerical results are presented for England and Wales in the year 2000. It is estimated that vegetation and soils in England and Wales constituted a net sink of 7.55 Mt C (1 Mt C = 10^12 g of carbon) in 2000, with standard deviation 0.56 Mt C resulting from the sources of uncertainty that are considered.
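As a rough illustration of the general approach described above (not the authors' actual vegetation model), the sketch below fits a Gaussian-process emulator to a handful of runs of a toy stand-in "simulator" and then propagates an assumed prior on the inputs through the cheap emulator by Monte Carlo; the simulator, priors and aggregation are all invented for illustration.

```python
# Minimal sketch: Gaussian-process emulation of an expensive simulator,
# then Monte Carlo propagation of input uncertainty to an aggregated output.
# The "simulator" here is a toy stand-in, not the carbon-dynamics process model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulator(x):
    # Toy stand-in for one site's net carbon flux as a function of two inputs.
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2

# 1. Run the simulator on a small design (in reality each run is expensive).
X_design = rng.uniform(0, 1, size=(40, 2))
y_design = simulator(X_design)

# 2. Fit a GP emulator to the design runs.
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.2, 0.2])
emulator = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_design, y_design)

# 3. Draw inputs from an assumed (elicited) prior and evaluate the cheap emulator.
n_mc = 10_000
X_prior = rng.beta(2.0, 2.0, size=(n_mc, 2))
flux_site = emulator.predict(X_prior)

# 4. Aggregate over sites (here: three sites sharing the same inputs, crude on purpose).
total_flux = 3 * flux_site

print(f"mean total flux          = {total_flux.mean():.3f}")
print(f"std from input uncertainty = {total_flux.std():.3f}")
```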
2.
We present a case study based on a depression study that will illustrate the use of Bayesian statistics in the economic evaluation of cost-effectiveness data, demonstrate the benefits of the Bayesian approach (whilst honestly recognizing any deficiencies) with respect to frequentist methods, and provide details of using the methods, including computer code where appropriate. Copyright © 2003 John Wiley & Sons, Ltd.
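To give a flavour of the kind of Bayesian cost-effectiveness summary such an analysis produces (not the depression-study results themselves), the sketch below takes simulated placeholder posterior draws of incremental cost and incremental effect and computes the probability that the new treatment is cost-effective at several willingness-to-pay thresholds.

```python
# Minimal sketch of a Bayesian cost-effectiveness summary: given posterior draws
# of incremental cost and incremental effect, compute the probability that the
# new treatment is cost-effective at a willingness-to-pay threshold.
# The draws below are simulated placeholders, not the depression-study posterior.
import numpy as np

rng = np.random.default_rng(1)
n_draws = 20_000
delta_effect = rng.normal(0.05, 0.03, n_draws)   # incremental effect (e.g. QALYs)
delta_cost = rng.normal(300.0, 150.0, n_draws)   # incremental cost (currency units)

for wtp in (2_000, 10_000, 30_000):
    inb = wtp * delta_effect - delta_cost        # incremental net monetary benefit
    p_ce = np.mean(inb > 0)                      # posterior P(cost-effective)
    print(f"WTP = {wtp:>6}: P(cost-effective) = {p_ce:.3f}")
```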
3.
In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the described case of a non-inferiority trial for one of these immunosuppressants, the dose is changed, and another is replaced by an investigational drug. This test regimen is compared with the active control regimen. Justification for the non-inferiority margin is challenging as the putative placebo has never been studied in a clinical trial. We propose the use of a random-effect meta-regression, where each immunosuppressant component of the regimen enters as a covariate. This allows us to make inference on the difference between the putative placebo and the active control. From this, various methods can then be used to derive the non-inferiority margin. A hybrid of the 95/95 and synthesis approach is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration. The Food and Drug Administration accepted our proposed non-inferiority margin and our rationale.
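The sketch below illustrates only the final fixed-margin ("95/95") step with made-up numbers: given a meta-regression estimate of the active-control benefit over the putative placebo and its standard error, it takes the conservative 95% bound (M1) and preserves a fraction of it (M2). The 51-trial meta-regression and the paper's hybrid 95/95/synthesis approach are not reproduced here.

```python
# Minimal sketch of the fixed-margin ("95/95") derivation of a non-inferiority
# margin. d_hat and se are assumed outputs of a meta-regression comparing the
# active control with the putative placebo (e.g. on a log-odds scale); they are
# invented for illustration.
import math

d_hat = 0.80      # assumed control-vs-placebo benefit from the meta-regression
se = 0.15         # assumed standard error of that estimate
z = 1.96

m1 = d_hat - z * se        # conservative 95% bound: effect we are confident exists
m2 = 0.5 * m1              # preserve 50% of that effect (a common choice)

print(f"M1 (conservative control effect) = {m1:.3f}")
print(f"M2 (non-inferiority margin)      = {m2:.3f}")
```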
4.
Eliciting expert knowledge about several uncertain quantities is a complex task when those quantities exhibit associations. A well-known example of such a problem is eliciting knowledge about a set of uncertain proportions which must sum to 1. The usual approach is to assume that the expert's knowledge can be adequately represented by a Dirichlet distribution, since this is by far the simplest multivariate distribution that is appropriate for such a set of proportions. It is also the most convenient, particularly when the expert's prior knowledge is to be combined with a multinomial sample since then the Dirichlet is the conjugate prior family. Several methods have been described in the literature for eliciting beliefs in the form of a Dirichlet distribution, which typically involve eliciting from the expert enough judgements to identify uniquely the Dirichlet hyperparameters. We describe here a new method which employs the device of over-fitting, i.e. eliciting more than the minimal number of judgements, in order to (a) produce a more carefully considered Dirichlet distribution and (b) ensure that the Dirichlet distribution is indeed a reasonable fit to the expert's knowledge. The method has been implemented in a software extension of the Sheffield elicitation framework (SHELF) to facilitate the multivariate elicitation process.
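A simple way to see the over-fitting idea is sketched below: elicit the quartiles of each proportion (3k judgements for only k hyperparameters) and choose the Dirichlet hyperparameters whose Beta marginals best match them in a least-squares sense. The elicited numbers are invented, and the SHELF extension's actual fitting procedure may differ from this sketch.

```python
# Minimal sketch of fitting a Dirichlet by "over-fitting": more elicited
# judgements (quartiles of each proportion) than hyperparameters, reconciled
# by least squares. Under Dirichlet(a), the marginal of proportion i is
# Beta(a_i, a0 - a_i) with a0 = sum(a).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import beta

# Elicited lower quartile, median, upper quartile for three proportions (assumed values).
elicited = np.array([
    [0.15, 0.20, 0.26],
    [0.25, 0.30, 0.36],
    [0.42, 0.50, 0.58],
])
probs = np.array([0.25, 0.5, 0.75])

def discrepancy(log_a):
    a = np.exp(log_a)                       # keep hyperparameters positive
    a0 = a.sum()
    q = np.array([beta.ppf(probs, ai, a0 - ai) for ai in a])
    return np.sum((q - elicited) ** 2)      # mismatch between implied and elicited quartiles

fit = minimize(discrepancy, x0=np.zeros(3), method="Nelder-Mead")
a_hat = np.exp(fit.x)
print("fitted Dirichlet hyperparameters:", np.round(a_hat, 2))
print("implied means:", np.round(a_hat / a_hat.sum(), 3))
```

The residual discrepancy itself serves point (b) in the abstract: if the best-fitting Dirichlet still reproduces the elicited quartiles poorly, the Dirichlet family may not be an adequate representation of the expert's beliefs.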
5.
Elicitation     
There are various situations in which it may be important to obtain expert opinion about some unknown quantity or quantities. But it is not enough simply to ask the expert for an estimate of the unknown quantity: we also need to know how far from that estimate the true value might be. Tony O'Hagan describes the process of elicitation: the formulation of the expert's knowledge in the form of a probability distribution.
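A tiny illustration of that formulation step, with invented judgements rather than anything from the article: an expert states a median and a 90% credible interval for a positive quantity, a lognormal distribution is fitted by quantile matching, and the implied quantiles are fed back so the expert can check them.

```python
# Tiny sketch of univariate elicitation by quantile matching: fit a lognormal to
# an expert's median and 90% credible interval, then feed back implied quantiles.
# All numbers are invented for illustration.
import numpy as np
from scipy.stats import lognorm

median, q05, q95 = 20.0, 8.0, 45.0          # assumed expert judgements

mu = np.log(median)
sigma = np.log(q95 / q05) / (2 * 1.6449)    # width of a symmetric 90% interval on the log scale

dist = lognorm(s=sigma, scale=np.exp(mu))
print("implied quartiles:", np.round(dist.ppf([0.25, 0.75]), 1))
print("implied 90% interval:", np.round(dist.ppf([0.05, 0.95]), 1))
```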
6.
Conventional clinical trial design involves considerations of power, and sample size is typically chosen to achieve a desired power conditional on a specified treatment effect. In practice, there is considerable uncertainty about what the true underlying treatment effect may be, and so power does not give a good indication of the probability that the trial will demonstrate a positive outcome. Assurance is the unconditional probability that the trial will yield a ‘positive outcome’. A positive outcome usually means a statistically significant result, according to some standard frequentist significance test. The assurance is then the prior expectation of the power, averaged over the prior distribution for the unknown true treatment effect. We argue that assurance is an important measure of the practical utility of a proposed trial, and indeed that it will often be appropriate to choose the size of the sample (and perhaps other aspects of the design) to achieve a desired assurance, rather than to achieve a desired power conditional on an assumed treatment effect. We extend the theory of assurance to two‐sided testing and equivalence trials. We also show that assurance is straightforward to compute in some simple problems of normal, binary and gamma distributed data, and that the method is not restricted to simple conjugate prior distributions for parameters. Several illustrations are given. Copyright © 2005 John Wiley & Sons, Ltd.
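A minimal sketch of the assurance calculation, for a two-arm trial with normally distributed outcomes and known standard deviation: conditional power is averaged over a normal prior for the true treatment effect by Monte Carlo, and compared with the power evaluated at the prior mean. The sample size, prior and effect sizes are assumptions for illustration only.

```python
# Minimal sketch: assurance = prior-averaged power, for a two-arm trial with
# normal outcomes and known sigma. All numbers are assumptions for illustration.
import numpy as np
from scipy.stats import norm

sigma = 1.0            # known outcome SD
n = 250                # patients per arm
alpha = 0.05
se = sigma * np.sqrt(2 / n)
z_crit = norm.ppf(1 - alpha / 2)

def power(delta):
    # Probability of rejecting in favour of the new treatment (upper tail only).
    return norm.cdf(delta / se - z_crit)

# Prior on the true effect, e.g. summarising earlier-phase evidence: N(0.25, 0.15^2).
rng = np.random.default_rng(2)
delta_draws = rng.normal(0.25, 0.15, 100_000)

print(f"power at the prior mean effect:   {power(0.25):.3f}")
print(f"assurance (prior-averaged power): {power(delta_draws).mean():.3f}")
```

The assurance typically sits well below the nominal conditional power, because the prior puts weight on effects smaller than the assumed design effect.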
7.
This article is concerned with the complex inequality experienced by mothers in employment, and applies ‘strong intersectionality’ to women's narratives about time to reveal the intersecting inequalities women experience and gendered organizational practices. Drawing on empirical research with 30 Irish ‘working mothers’, this article explores the way time is ordered and managed to create gendered inequalities for women at the intersection of maternity with paid work. By conceptualizing gender, maternity and class as simultaneous processes of identity practice, institutional practice and social practice, following Holvino, women's narratives reveal that organizations manage and order time to fit with notions of ‘ideal workers’, which perpetuate older hierarchies and gendered inequalities, and which create regimes of inequality for women at the intersection of maternity with paid work.
8.
Tony O'Hagan, Significance (2004), 1(3): 132–133
There are many things that I am uncertain about, says Tony O'Hagan. Some are merely unknown to me, while others are unknowable. This article is about different kinds of uncertainty, and how the distinction between them impinges on the foundations of Probability and Statistics.
9.
The development of a new drug is a major undertaking and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044–0.142 for an ACR20 outcome and 0.057–0.213 for an ACR50 outcome. Copyright © 2009 John Wiley & Sons, Ltd.
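The sketch below shows the bare bones of this kind of Bayesian clinical trial simulation for a Phase 2b to Phase 3 sequence with a binary (responder) endpoint: the true response rate of the new drug is drawn from a prior standing in for end-of-Phase-2a knowledge, a Phase 2b trial is simulated, and only if its go/no-go criterion is met is a Phase 3 trial simulated and tested. It is not the Rheumatoid Arthritis Drug Development Model; all rates, sample sizes and decision rules are invented.

```python
# Stripped-down sketch of Bayesian clinical trial simulation for a Phase 2b -> 3
# programme with a binary responder endpoint. A programme "succeeds" only if the
# Phase 2b decision criterion and the Phase 3 significance test are both met.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_sim = 20_000

p_ctrl = 0.30                                   # assumed control responder rate
p_new_draws = rng.beta(14, 21, n_sim)           # prior on the new drug's rate (mean 0.40)

n2, n3 = 80, 300                                # per-arm sample sizes, Phase 2b / Phase 3
success = 0
for p_new in p_new_draws:
    # Phase 2b: go to Phase 3 only if the observed rate difference exceeds 0.10.
    x2_new = rng.binomial(n2, p_new)
    x2_ctrl = rng.binomial(n2, p_ctrl)
    if (x2_new - x2_ctrl) / n2 <= 0.10:
        continue
    # Phase 3: two-sample test of proportions, one-sided alpha = 0.025.
    x3_new = rng.binomial(n3, p_new)
    x3_ctrl = rng.binomial(n3, p_ctrl)
    pooled = (x3_new + x3_ctrl) / (2 * n3)
    se = np.sqrt(2 * pooled * (1 - pooled) / n3)
    z = (x3_new / n3 - x3_ctrl / n3) / se
    if z > norm.ppf(0.975):
        success += 1

print(f"estimated probability of programme success: {success / n_sim:.3f}")
```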
10.
We describe a Bayesian model for a scenario in which the population of errors contains many 0s and there is a known covariate. This kind of structure typically occurs in auditing, and we use auditing as the driving application of the method. Our model is based on a categorization of the error population together with a Bayesian nonparametric method of modelling errors within some of the categories. Inference is through simulation. We conclude with an example based on a data set provided by the UK's National Audit Office.
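A much-simplified stand-in for this kind of model is sketched below: the sampled items are split into zero-error and nonzero-error groups, a Beta posterior is placed on the probability of a nonzero error, and a Bayesian bootstrap over the observed taintings (error as a fraction of book value, the known covariate) drives posterior simulation of the total error in the audited population. The paper's categorization and Bayesian nonparametric model within categories are richer than this; the data below are simulated.

```python
# Simplified sketch of posterior simulation for an audit population with many
# zero errors and book value as the known covariate. Not the paper's model.
import numpy as np

rng = np.random.default_rng(4)

# Assumed audit sample: book values and observed errors (mostly zero).
book = rng.gamma(2.0, 500.0, size=200)
err = np.where(rng.random(200) < 0.08, book * rng.uniform(0.05, 0.6, 200), 0.0)

taintings = err[err > 0] / book[err > 0]        # error as a fraction of book value
k, n = len(taintings), len(err)
pop_book_total = 5_000_000.0                    # known total book value of the population

n_draws = 10_000
totals = np.empty(n_draws)
for i in range(n_draws):
    p_err = rng.beta(1 + k, 1 + n - k)          # posterior probability of a nonzero error
    w = rng.dirichlet(np.ones(k))               # Bayesian bootstrap weights over taintings
    mean_tainting = np.dot(w, taintings)        # posterior draw of the mean tainting
    totals[i] = p_err * mean_tainting * pop_book_total

print(f"posterior mean total error: {totals.mean():,.0f}")
print(f"95% credible interval: {np.percentile(totals, [2.5, 97.5]).round(0)}")
```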