1.
This paper studies a robust approach to the analysis of cell pedigree data, building on the work of Huggins & Marschner (1991), which discussed M-estimation for the so-called bifurcating autoregressive process. The study allows for incomplete observation of the pedigree, and incorporates the possibility of additive effects outliers, as discussed in the time series literature. Some properties of the proposed estimation procedure are studied, including a Monte Carlo investigation of robustness in the presence of contamination.
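The abstract does not reproduce the estimation details, but the general idea can be illustrated. The Python sketch below (all parameter values hypothetical) simulates a simple bifurcating autoregressive pedigree with occasional additive outliers and compares a Huber-type M-estimate of the autoregressive parameter with an ordinary least-squares fit; it is a minimal illustration of robust estimation for this kind of process, not the procedure studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

def simulate_bar(n_generations=10, phi=0.5, mu=1.0, sigma=0.3,
                 outlier_prob=0.05, outlier_size=5.0):
    """Simulate a bifurcating autoregressive (BAR) pedigree indexed 1..2^g - 1,
    with occasional additive outliers added to the observed values."""
    n = 2 ** n_generations - 1
    x = np.zeros(n + 1)                      # 1-based indexing; x[0] unused
    x[1] = mu
    for i in range(1, n // 2 + 1):
        for d in (2 * i, 2 * i + 1):
            if d <= n:
                x[d] = mu + phi * (x[i] - mu) + rng.normal(0, sigma)
    obs = x.copy()
    obs[rng.random(n + 1) < outlier_prob] += outlier_size   # additive outliers
    return obs[1:]

def mother_daughter_pairs(x):
    """Return (mother, daughter) observation pairs for a complete pedigree."""
    n = len(x)
    x = np.concatenate(([np.nan], x))        # back to 1-based indexing
    mothers, daughters = [], []
    for i in range(1, n // 2 + 1):
        for d in (2 * i, 2 * i + 1):
            if d <= n:
                mothers.append(x[i]); daughters.append(x[d])
    return np.array(mothers), np.array(daughters)

def huber_loss(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2)

def m_estimate_phi(mothers, daughters, c=1.345):
    """Huber M-estimate of the autoregressive parameter phi (robustly centred)."""
    mu = np.median(daughters)                         # robust centring
    s = np.median(np.abs(daughters - mu)) / 0.6745    # MAD scale estimate
    def objective(phi):
        r = (daughters - mu) - phi * (mothers - mu)
        return huber_loss(r / s, c).sum()
    return minimize_scalar(objective, bounds=(-0.99, 0.99), method="bounded").x

obs = simulate_bar()
m, d = mother_daughter_pairs(obs)
print("Huber M-estimate of phi:", round(m_estimate_phi(m, d), 3))
print("Least-squares estimate :", round(np.polyfit(m - np.median(m), d - np.median(d), 1)[0], 3))
```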
2.
Food and nutrition insecurity remains a challenge in sub-Saharan Africa. Several studies have examined food and nutrition insecurity in urban or rural areas but have not captured the whole continuum. Between November and December 2013, 240 households were surveyed along the urban–rural continuum in Northern Ghana. The study objective was to understand the socio-spatial dynamics of household food and nutrition insecurity and to investigate the role played by urban, peri-urban and rural agriculture. The study found more involvement in agriculture in rural areas than in peri-urban and urban areas. Households in urban areas were more food insecure (HFIAS > 11) than their counterparts in peri-urban and rural areas. Stunting increased by 3.4 times (p = 0.048) among households located in the peri-urban area. Wasting was reduced by 0.16 times among households that produced staple foods or vegetables (p = 0.011). Overweight was reduced by 0.04 times among households that produced livestock (p = 0.031). The results reveal a socio-spatial dimension of food and nutrition insecurity that is related to agricultural activities.
3.
Multi-country randomised clinical trials (MRCTs) are common in the medical literature, and their interpretation has been the subject of extensive recent discussion. In many MRCTs, an evaluation of treatment effect homogeneity across countries or regions is conducted. Subgroup analysis principles require a significant test of interaction in order to claim heterogeneity of treatment effect across subgroups, such as countries in an MRCT. As clinical trials are typically underpowered for tests of interaction, overly optimistic expectations of treatment effect homogeneity can lead researchers, regulators and other stakeholders to over-interpret apparent differences between subgroups even when heterogeneity tests are not significant. In this paper, we consider some exploratory analysis tools to address this issue. We present three measures derived using the theory of order statistics, which can be used to understand the magnitude and the nature of the variation in treatment effects that can arise merely as an artefact of chance. These measures are not intended to replace a formal test of interaction; rather, they provide non-inferential visual aids that allow comparison of the observed and expected differences between regions or other subgroups, and are a useful supplement to a formal test of interaction. We discuss how our methodology differs from recently published methods addressing the same issue. A case study of our approach is presented using data from the Study of Platelet Inhibition and Patient Outcomes (PLATO), which was a large cardiovascular MRCT that has been the subject of controversy in the literature. An R package is available that implements the proposed methods.
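As a non-authoritative illustration of the order-statistics idea — how much region-to-region variation in effect estimates is expected even under perfect homogeneity — the Python sketch below uses simulation rather than the paper's analytic measures or its R package. All effect sizes and standard errors are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def expected_ordered_effects(true_effect, se_by_region, n_sim=20000):
    """Simulate region-level effect estimates under a *common* true effect and
    return the average ordered (smallest-to-largest) estimates and the average
    max-min range. Observed spreads no larger than these are consistent with
    homogeneity plus sampling noise."""
    draws = rng.normal(true_effect, se_by_region, size=(n_sim, len(se_by_region)))
    ordered = np.sort(draws, axis=1)
    return ordered.mean(axis=0), (ordered[:, -1] - ordered[:, 0]).mean()

# Hypothetical example on the log hazard-ratio scale: overall effect log(0.85),
# four regions with standard errors reflecting their sample sizes.
log_hr = np.log(0.85)
se = np.array([0.10, 0.12, 0.15, 0.20])
expected_order, expected_range = expected_ordered_effects(log_hr, se)
print("Expected ordered log-HR estimates under homogeneity:", np.round(expected_order, 3))
print("Expected max-min range due to chance alone:", round(expected_range, 3))
```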
4.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log-binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non-MLE approaches. In this paper we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non-MLE methods. Nonetheless, non-MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi-likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi-likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.
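The two broad strategies contrasted here can be sketched with standard tools. The Python example below (simulated data, hypothetical effect sizes) fits the log-binomial MLE via a binomial GLM with a log link, and a Poisson regression with robust standard errors as one familiar quasi-likelihood-style alternative; it illustrates the modelling choices only and is not the paper's recommended computational method.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Simulated data (hypothetical): binary outcome whose risk depends
# multiplicatively on a binary exposure and a continuous covariate.
n = 2000
exposure = rng.binomial(1, 0.4, n)
age = rng.uniform(0, 1, n)
risk = 0.15 * np.exp(0.5 * exposure + 0.4 * age)   # true RRs: e^0.5 and e^0.4
y = rng.binomial(1, np.clip(risk, 0, 1))
X = sm.add_constant(np.column_stack([exposure, age]))

# 1) Log-binomial MLE: binomial GLM with a log link. Convergence can fail when
#    fitted probabilities approach 1; conservative starting values help.
logbin = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))
mle = logbin.fit(start_params=[np.log(y.mean()), 0.0, 0.0])
print("Log-binomial RR estimates  :", np.round(np.exp(mle.params[1:]), 3))

# 2) A quasi-likelihood-style alternative: Poisson regression with robust
#    (sandwich) standard errors, which also targets relative risks but does
#    not enforce the probability constraint.
poisson = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("Modified-Poisson RR estimates:", np.round(np.exp(poisson.params[1:]), 3))
```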
5.
Clinical trials are often designed to compare several treatments with a common control arm in pairwise fashion. In this paper we study optimal designs for such studies, based on minimizing the total number of patients required to achieve a given level of power. A common approach when designing studies to compare several treatments with a control is to achieve the desired power for each individual pairwise treatment comparison. However, it is often more appropriate to characterize power in terms of the family of null hypotheses being tested, and to control the probability of rejecting all, or alternatively any, of these individual hypotheses. While all approaches lead to unbalanced designs with more patients allocated to the control arm, it is found that the optimal design and required number of patients can vary substantially depending on the chosen characterization of power. The methods make allowance for both continuous and binary outcomes and are illustrated with reference to two clinical trials, one involving multiple doses compared to placebo and the other involving combination therapy compared to mono-therapies. In one example a 55% reduction in sample size is achieved through an optimal design combined with the appropriate characterization of power.
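A rough sense of how the characterization of power changes the picture can be obtained by simulation. The Python sketch below uses hypothetical values (three active arms, a standardised effect of 0.4, and a control arm inflated by roughly the classical square-root-of-k allocation) and reports per-comparison, "any" and "all" rejection rates; it is not the optimisation procedure developed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

def power_by_simulation(n_control, n_treat, k, delta, sigma, alpha=0.05, n_sim=4000):
    """Simulate k treatment-vs-control comparisons (continuous outcome, shared
    control arm) and return per-comparison, 'any', and 'all' rejection rates."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    rejections = np.zeros((n_sim, k), dtype=bool)
    se = sigma * np.sqrt(1 / n_control + 1 / n_treat)
    for s in range(n_sim):
        control = rng.normal(0.0, sigma, n_control)
        for j in range(k):
            treat = rng.normal(delta, sigma, n_treat)
            rejections[s, j] = abs(treat.mean() - control.mean()) / se > z_crit
    return rejections.mean(), rejections.any(axis=1).mean(), rejections.all(axis=1).mean()

# Hypothetical design: k = 3 active arms, standardised effect 0.4,
# control arm roughly sqrt(k) times the size of each treatment arm.
k, delta, sigma, n_treat = 3, 0.4, 1.0, 100
n_control = int(round(np.sqrt(k) * n_treat))
per_comp, any_power, all_power = power_by_simulation(n_control, n_treat, k, delta, sigma)
print(f"N total = {n_control + k * n_treat}")
print(f"per-comparison power = {per_comp:.2f}, "
      f"'any' power = {any_power:.2f}, 'all' power = {all_power:.2f}")
```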
6.
7.
Most clinical studies that investigate the impact of a therapy simultaneously record the frequency of adverse events in order to monitor the safety of the intervention. Study reports typically summarise adverse event data by tabulating the frequencies of the worst grade experienced, but provide no details of the temporal profiles of specific types of adverse events. Such 'toxicity profiles' are potentially important tools in disease management and in the assessment of newer therapies, including targeted treatments and immunotherapy, where different types of toxicity may be more common at various times during long-term drug exposure. Toxicity profiles of commonly experienced adverse events occurring during long-term treatment could assist in evaluating the costs of the health care benefits of therapy. We show how to generate toxicity profiles using an adaptation of the ordinal time-to-event model comprising a two-step process: the multinomial response probabilities are estimated using multinomial logistic regression and combined with recurrent time-to-event hazard estimates to produce cumulative event probabilities for each of the multinomial adverse event response categories. Such a model permits the simultaneous assessment of the risk of events over time and provides cumulative risk probabilities for each type of adverse event response. The method can be applied more generally by using different models to estimate the outcome/response probabilities. The method is illustrated by developing toxicity profiles for three distinct types of adverse events associated with two treatment regimens for patients with advanced breast cancer.
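The two-step construction can be caricatured in a few lines. In the Python sketch below, empirical grade proportions stand in for the multinomial logistic regression, a Nelson–Aalen-type cumulative hazard is computed over simulated recurrent events, and the two are combined into cumulative probabilities of at least one event of each grade. All data, parameter values and the simplifying independence assumption are hypothetical; this is not the authors' fitted model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical recurrent adverse-event data: event times (months) per subject up
# to censoring, each with an ordinal grade (1 = mild, 2 = moderate, 3 = severe).
n_subjects, follow_up, rate = 200, 24.0, 0.15       # events per month
grade_probs = np.array([0.6, 0.3, 0.1])             # marginal grade distribution

event_times, grades, censor = [], [], rng.uniform(12, follow_up, n_subjects)
for i in range(n_subjects):
    t = 0.0
    while True:
        t += rng.exponential(1 / rate)
        if t > censor[i]:
            break
        event_times.append(t)
        grades.append(rng.choice([1, 2, 3], p=grade_probs))

# Step 1: grade (multinomial response) probabilities. With covariates this would
# be a multinomial logistic regression; here the empirical proportions suffice.
grades = np.array(grades)
p_hat = np.array([(grades == g).mean() for g in (1, 2, 3)])

# Step 2: Nelson-Aalen-type cumulative hazard for recurrent events
# (Andersen-Gill risk set: everyone still under follow-up is at risk).
times = np.sort(np.array(event_times))
at_risk = np.array([(censor >= t).sum() for t in times])
cum_hazard = np.cumsum(1.0 / at_risk)

# Combine: cumulative probability of at least one event of each grade by time t,
# assuming grade is assigned independently of the event process.
grid = np.linspace(0, follow_up, 25)
Lambda = np.interp(grid, times, cum_hazard, left=0.0)
profile = 1.0 - np.exp(-np.outer(Lambda, p_hat))    # columns: grades 1..3
print("At 12 months, P(>=1 event) by grade:", np.round(profile[grid == 12][0], 3))
```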
8.
When the infection rate associated with an epidemic appears to decline over time, one explanation is a constant level of infectiousness combined with heterogeneity among the susceptible population. In this paper we consider random effects models for such heterogeneity, particularly in discrete time. Maximum likelihood techniques are discussed as well as a more convenient approach based on martingale estimating equations. An application to data on a smallpox outbreak is considered.
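Why heterogeneity alone can mimic a declining infection rate is easy to demonstrate by simulation. The Python sketch below assumes a gamma-distributed susceptibility frailty with constant infectiousness (all parameter values hypothetical) and tracks the apparent per-infective infection hazard over time; it illustrates the phenomenon only and does not implement the paper's likelihood or martingale estimating-equation methods.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_heterogeneous_epidemic(n=5000, beta=5e-4, frailty_var=2.0, steps=12):
    """Discrete-time epidemic with constant infectiousness beta but gamma-distributed
    individual susceptibility (mean 1, variance frailty_var). Returns the estimated
    per-infective infection hazard among those still susceptible at each step."""
    z = rng.gamma(1.0 / frailty_var, frailty_var, n)      # frailties, mean 1
    infected = np.zeros(n, dtype=bool)
    infected[rng.choice(n, 10, replace=False)] = True
    hazards = []
    for _ in range(steps):
        n_inf, susceptible = infected.sum(), ~infected
        if susceptible.sum() == 0:
            break
        # probability that a still-susceptible individual is infected this step
        p_inf = 1.0 - np.exp(-beta * z[susceptible] * n_inf)
        new = rng.random(susceptible.sum()) < p_inf
        frac = new.mean()
        hazards.append(-np.log(max(1.0 - frac, 1e-12)) / n_inf)
        infected[np.where(susceptible)[0][new]] = True
    return np.array(hazards)

hazards = simulate_heterogeneous_epidemic()
print("Apparent per-infective hazard, first vs last step:",
      f"{hazards[0]:.2e} vs {hazards[-1]:.2e}")
```

Even though beta is constant, the apparent hazard falls over time because the most susceptible individuals are infected first, leaving a progressively more resistant pool.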
9.
A robust approach to the analysis of epidemic data is suggested. This method is based on a natural extension of M-estimation for i.i.d. observations where the distribution may be asymmetric. It is discussed initially in the context of a general discrete time stochastic process before being applied to previously studied epidemic models. In particular we consider a class of chain binomial models and models based on time dependent branching processes. Robustness and efficiency properties are studied through simulation and some previously analysed data sets are considered.
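As a toy illustration of robustness in this setting — not the estimator developed in the paper — the Python sketch below simulates a Reed–Frost chain binomial epidemic, contaminates one case count, and compares the maximum likelihood estimate of the escape probability with a simple Huber-type M-estimate based on standardised residuals. All parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

def simulate_reed_frost(s0=200, i0=3, q=0.99, steps=15):
    """Simulate a Reed-Frost chain binomial epidemic; q is the per-contact escape
    probability. Returns susceptibles S_t, current infectives I_t, new cases I_{t+1}."""
    S, I = [s0], [i0]
    for _ in range(steps):
        p = 1.0 - q ** I[-1]                      # infection probability this step
        new = rng.binomial(S[-1], p)
        S.append(S[-1] - new)
        I.append(new)
    return np.array(S[:-1]), np.array(I[:-1]), np.array(I[1:])

def fit_q(S, I_now, I_next, robust=False, c=1.345):
    """Estimate q by minimising either the negative binomial log-likelihood (MLE)
    or a Huber loss on standardised Pearson residuals (a simple M-estimate)."""
    keep = (S > 0) & (I_now > 0)                  # drop degenerate generations
    S, I_now, I_next = S[keep], I_now[keep], I_next[keep]
    def objective(q):
        p = np.clip(1.0 - q ** I_now, 1e-10, 1 - 1e-10)
        if not robust:
            return -np.sum(I_next * np.log(p) + (S - I_next) * np.log(1 - p))
        r = (I_next - S * p) / np.sqrt(S * p * (1 - p))
        a = np.abs(r)
        return np.sum(np.where(a <= c, 0.5 * r ** 2, c * a - 0.5 * c ** 2))
    return minimize_scalar(objective, bounds=(0.90, 0.9999), method="bounded").x

S, I_now, I_next = simulate_reed_frost()
I_next_contam = I_next.copy()
I_next_contam[3] += 40                            # a grossly misrecorded count
print("MLE, clean data         :", round(fit_q(S, I_now, I_next), 4))
print("MLE, contaminated       :", round(fit_q(S, I_now, I_next_contam), 4))
print("M-estimate, contaminated:", round(fit_q(S, I_now, I_next_contam, robust=True), 4))
```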
10.
A model to accommodate time-to-event ordinal outcomes was proposed by Berridge and Whitehead. Very few studies have adopted this approach, despite its appeal in incorporating several ordered categories of event outcome. More recently, there has been increased interest in utilizing recurrent events to analyze practical endpoints in the study of disease history and to help quantify the changing pattern of disease over time. For example, in studies of heart failure, the analysis of a single fatal event no longer provides sufficient clinical information to manage the disease. Similarly, the grade/frequency/severity of adverse events may be more important than simply prolonged survival in studies of toxic therapies in oncology. We propose an extension of the ordinal time-to-event model to allow for multiple/recurrent events, in the case of both marginal models (where all subjects are at risk for each recurrence, irrespective of whether they have experienced previous recurrences) and conditional models (where subjects are at risk of a recurrence only if they have experienced a previous recurrence). These models rely on marginal and conditional estimates of the instantaneous baseline hazard and provide estimates of the probabilities of an event of each severity for each recurrence over time. We outline how confidence intervals for these probabilities can be constructed, illustrate how to fit these models, and provide examples of the methods together with an interpretation of the results.
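The distinction between the marginal and conditional formulations comes down to how the risk sets are constructed. The Python sketch below builds the two data layouts from a small hypothetical set of recurrent adverse-event times (the subjects, times and the build_risk_sets helper are all invented for illustration) and reports how many subjects are at risk for each recurrence under each formulation; the hazard and probability estimation described in the abstract is not reproduced.

```python
import pandas as pd

# Hypothetical recurrent-event records: adverse-event times (months) per subject,
# plus each subject's censoring time.
events = {"A": [2.0, 5.0, 9.0], "B": [3.0], "C": [], "D": [1.5, 4.0]}
censor = {"A": 12.0, "B": 10.0, "C": 8.0, "D": 6.0}

def build_risk_sets(events, censor, max_recurrence=3, conditional=True):
    """Construct (start, stop, status) rows for each recurrence number k.
    Conditional: a subject enters risk set k only after experiencing event k-1.
    Marginal: every subject is at risk for every k from time 0."""
    rows = []
    for subj, times in events.items():
        for k in range(1, max_recurrence + 1):
            if conditional and len(times) < k - 1:
                break                            # not yet eligible for recurrence k
            start = times[k - 2] if (conditional and k > 1) else 0.0
            if len(times) >= k:
                stop, status = times[k - 1], 1
            else:
                stop, status = censor[subj], 0
            rows.append(dict(subject=subj, recurrence=k,
                             start=start, stop=stop, status=status))
    return pd.DataFrame(rows)

for label, conditional in [("conditional", True), ("marginal", False)]:
    df = build_risk_sets(events, censor, conditional=conditional)
    print(label, "- subjects at risk per recurrence:",
          df.groupby("recurrence")["subject"].nunique().to_dict())
```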