Similar literature
20 similar documents found (search time: 31 ms)
1.
Summary.  The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
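The recommended Poisson analysis can be sketched in a few lines: under the independence assumption, an aliquot at dilution fraction d is infectious with probability 1 - exp(-λd), where λ is the mean number of infectious units in the undiluted sample, and λ can be estimated by maximum likelihood. The assay data and the grid-search optimizer below are illustrative assumptions, not taken from the paper.

```python
import math

def neg_log_lik(lam, data):
    """Negative binomial log-likelihood under the Poisson assumption.

    data: list of (dilution_fraction, n_positive, n_tested) tuples.
    """
    ll = 0.0
    for d, pos, n in data:
        # P(aliquot infectious) when infectious units are Poisson(lam * d)
        p = 1.0 - math.exp(-lam * d)
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard the log at the boundary
        ll += pos * math.log(p) + (n - pos) * math.log(1.0 - p)
    return -ll

def mle_lambda(data):
    # crude grid search for the MLE of the mean number of
    # infectious units in the undiluted sample
    grid = [x / 100.0 for x in range(1, 5001)]
    return min(grid, key=lambda lam: neg_log_lik(lam, data))

# hypothetical assay: 8 aliquots at each of four ten-fold dilutions
data = [(1.0, 8, 8), (0.1, 6, 8), (0.01, 2, 8), (0.001, 0, 8)]
lam_hat = mle_lambda(data)
```

In practice this fit is usually done with a standard generalized linear model (complementary log-log link on log dilution), which is the "standard statistical software" route the paper recommends.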

2.
Acute respiratory diseases are transmitted over networks of social contacts. Large-scale simulation models are used to predict epidemic dynamics and evaluate the impact of various interventions, but the contact behavior in these models is based on simplistic and strong assumptions which are not informed by survey data. These assumptions are also used for estimating transmission measures such as the basic reproductive number and secondary attack rates. Development of methodology to infer contact networks from survey data could improve these models and estimation methods. We contribute to this area by developing a model of within-household social contacts and using it to analyze the Belgian POLYMOD data set, which contains detailed diaries of social contacts in a 24-hour period. We model dependency in contact behavior through a latent variable indicating which household members are at home. We estimate age-specific probabilities of being at home and age-specific probabilities of contact conditional on two members being at home. Our results differ from the standard random mixing assumption. In addition, we find that the probability that all members contact each other on a given day is fairly low: 0.49 for households with two 0-5 year olds and two 19-35 year olds, and 0.36 for households with two 12-18 year olds and two 36+ year olds. We find higher contact rates in households with 2-3 members, helping explain the higher influenza secondary attack rates found in households of this size.

3.
Abstract. A stochastic epidemic model is defined in which each individual belongs to a household, a secondary grouping (typically school or workplace) and also the community as a whole. Moreover, infectious contacts take place in these three settings according to potentially different rates. For this model, we consider how different kinds of data can be used to estimate the infection rate parameters with a view to understanding what can and cannot be inferred. Among other things we find that temporal data can be of considerable inferential benefit compared with final size data, that the degree of heterogeneity in the data can have a considerable effect on inference for non-household transmission, and that inferences can be materially different from those obtained from a model with only two levels of mixing. We illustrate our findings by analysing a highly detailed dataset concerning a measles outbreak in Hagelloch, Germany.

4.
A class of individual-level models (ILMs) outlined by R. Deardon et al., [Inference for individual level models of infectious diseases in large populations, Statist. Sin. 20 (2010), pp. 239–261] can be used to model the spread of infectious diseases in discrete time. The key feature of these ILMs is that they take into account covariate information on susceptible and infectious individuals as well as shared covariate information such as geography or contact measures. Here, such ILMs are fitted in a Bayesian framework using Markov chain Monte Carlo techniques to data sets from two studies on influenza transmission within households in Hong Kong during 2008 to 2009 and 2009 to 2010. The focus of this paper is to estimate the effect of vaccination on infection risk and choose a model that best fits the infection data.

5.
In seasonal influenza epidemics, pathogens such as respiratory syncytial virus (RSV) often co-circulate with influenza and cause influenza-like illness (ILI) in human hosts. However, it is often impractical to test for each potential pathogen or to collect specimens for each observed ILI episode, making inference about influenza transmission difficult. In the setting of infectious diseases, missing outcomes impose a particular challenge because of the dependence among individuals. We propose a Bayesian competing-risk model for multiple co-circulating pathogens for inference on transmissibility and intervention efficacies under the assumption that missingness in the biological confirmation of the pathogen is ignorable. Simulation studies indicate a reasonable performance of the proposed model even if the number of potential pathogens is misspecified. They also show that a moderate amount of missing laboratory test results has only a small impact on inference about key parameters in the setting of close contact groups. Using the proposed model, we found that a non-pharmaceutical intervention is marginally protective against transmission of influenza A in a study conducted in elementary schools.

6.
Tests of space-time clustering such as the Knox test are used by epidemiologists in the preliminary analysis of datasets where an infectious aetiology is suspected. The Knox test statistic is the number of cases close in both space and time to another case. The test statistic proposed here is the excess of such cases over the number expected under the null hypothesis of no infection. It is argued that this modified test is more powerful than the Knox test because its statistic, unlike the Knox statistic, is not heavily tied. The use of the test is illustrated with examples.
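The modified statistic described above can be computed directly from pairwise comparisons: count case pairs that are close in space, close in time, and close in both, then subtract the number of doubly close pairs expected when spatial and temporal closeness are unrelated. The thresholds and the toy case data below are hypothetical illustrations.

```python
import math
from itertools import combinations

def knox_counts(cases, d_max, t_max):
    # cases: list of (x, y, t); count pairs close in space, in time, and in both
    n_st = n_s = n_t = total = 0
    for (x1, y1, t1), (x2, y2, t2) in combinations(cases, 2):
        total += 1
        near_s = math.hypot(x1 - x2, y1 - y2) <= d_max
        near_t = abs(t1 - t2) <= t_max
        n_s += near_s
        n_t += near_t
        n_st += near_s and near_t
    return n_st, n_s, n_t, total

def knox_excess(cases, d_max, t_max):
    # modified statistic: observed doubly close pairs minus the number
    # expected under no space-time interaction
    n_st, n_s, n_t, total = knox_counts(cases, d_max, t_max)
    return n_st - n_s * n_t / total

# two hypothetical clusters, far apart in both space and time
cases = [(0, 0, 0), (0, 1, 0), (10, 10, 5), (10, 11, 5)]
excess = knox_excess(cases, d_max=2.0, t_max=1.0)
```

A significance level would normally come from permuting the time labels over the case locations, which preserves the spatial and temporal margins used in the expectation.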

7.
Poisson regression and case-crossover are frequently used methods to estimate transient risks of environmental exposures such as particulate air pollution on acute events such as mortality. Roughly speaking, a case-crossover design results from a Poisson regression by conditioning on the total number of failures. We show that the case-crossover design is somewhat more generally applicable than Poisson regression. Stratification in the case-crossover design is analogous to Poisson regression with dummy variables, or to a marked Poisson regression. Poisson regression makes it possible to express case-crossover likelihood functions as multinomial likelihoods without making reference to cases, controls, or matching. This derivation avoids the counterintuitive notion of basing inferences on exposures that occur post-failure.
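The conditioning step described above can be checked numerically: if counts in K exposure periods are independent Poisson with rates λ_k, then given their total they follow a multinomial distribution with cell probabilities λ_k/Σλ, which is the sense in which the case-crossover likelihood is multinomial. The rates and counts below are arbitrary illustrative values.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def multinomial_pmf(counts, probs):
    coef = math.factorial(sum(counts))
    for c in counts:
        coef //= math.factorial(c)
    p = float(coef)
    for c, q in zip(counts, probs):
        p *= q ** c
    return p

# arbitrary illustrative rates for three exposure periods, and observed counts
lams = [2.0, 3.0, 5.0]
counts = [1, 2, 4]

# joint Poisson probability, conditioned on the observed total
# (the total of independent Poissons is Poisson with the summed rate)
joint = 1.0
for c, lam in zip(counts, lams):
    joint *= poisson_pmf(c, lam)
cond = joint / poisson_pmf(sum(counts), sum(lams))

# multinomial with cell probabilities proportional to the rates
probs = [lam / sum(lams) for lam in lams]
```

The two probabilities agree exactly, so inference on the rate ratios can proceed from the multinomial factor alone.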

8.
The design of infectious disease studies has received little attention because they are generally viewed as observational studies. That is, epidemic and endemic disease transmission happens and we observe it. We argue here that statistical design often provides useful guidance for such studies with regard to the type of data and the size of the data set to be collected. It is shown that data on disease transmission in part of the community enable the estimation of central parameters, and that it is possible to compute the sample size required to make inferences with a desired precision. We illustrate this for data on disease transmission in a single community of uniformly mixing individuals and for data on outbreak sizes in households. Data on disease transmission are usually incomplete, which creates an identifiability problem for certain parameters of multitype epidemic models. We identify designs that can overcome this problem for the important objective of estimating parameters that help to assess the effectiveness of a vaccine. With disease transmission in animal groups there is greater scope for conducting planned experiments, and we explore some possibilities for such experiments. The topic is largely unexplored, and numerous open research problems in the area of statistical design of infectious disease studies are mentioned.

9.
Summary.  A common finding in analyses of geographic mobility is a strong association between past movement and current mobility. We argue that one of the driving forces behind this pattern is the strength of local social ties outside the household. We use data from the British Household Panel Survey on the location of the three closest friends and the frequency of meetings with them. We estimate the processes of friendship formation and residential mobility jointly, allowing for correlation between the two processes. Our results show that a larger number of close friends living nearby substantially reduces movement of 20 miles or more.

10.
Models of infectious disease over contact networks offer a versatile means of capturing heterogeneity in populations during an epidemic. Highly connected individuals tend to be infected at a higher rate early during an outbreak than those with fewer connections. A powerful approach based on the probability generating function of the individual degree distribution exists for modelling the mean field dynamics of outbreaks in such a population. We develop the same idea in a stochastic context, by proposing a comprehensive model for 1-week-ahead incidence counts. Our focus is inferring contact network (and other epidemic) parameters for some common degree distributions, in the case when the network is non-homogeneous 'at random'. Our model is initially set within a susceptible-infectious-removed framework, then extended to the susceptible-infectious-removed-susceptible scenario, and we apply this methodology to influenza A data.
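The probability-generating-function approach mentioned above encodes the degree distribution p_k as G(x) = Σ_k p_k x^k, whose derivative at x = 1 gives the mean degree that drives the mean field dynamics. A minimal numerical sketch for a Poisson degree distribution (for which G'(1) equals the Poisson mean exactly) is:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def pgf(x, lam, kmax=60):
    # G(x) = sum_k p_k x^k, degree distribution truncated at kmax
    # (the tail beyond kmax is negligible for moderate lam)
    return sum(poisson_pmf(k, lam) * x ** k for k in range(kmax + 1))

def mean_degree(lam, h=1e-5):
    # mean degree G'(1), here by central difference;
    # for the Poisson case this equals lam
    return (pgf(1.0 + h, lam) - pgf(1.0 - h, lam)) / (2.0 * h)
```

Higher moments needed for outbreak thresholds, such as G''(1), can be obtained the same way; for non-Poisson families only the `poisson_pmf` line changes.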

11.
In phase III clinical trials, some adverse events may not be rare or unexpected and can be considered as a primary measure for safety, particularly in trials of life-threatening conditions, such as stroke or traumatic brain injury. In some clinical areas, efficacy endpoints may be highly correlated with safety endpoints, yet the interim efficacy analyses under group sequential designs usually do not consider safety measures formally in the analyses. Furthermore, safety is often statistically monitored more frequently than efficacy measures. Because early termination of a trial in this situation can be triggered by either efficacy or safety, the impact of safety monitoring on the error probabilities of efficacy analyses may be nontrivial if the original design does not take the multiplicity effect into account. We estimate the actual error probabilities for a bivariate binary efficacy-safety response in large confirmatory group sequential trials. The estimated probabilities are verified by Monte Carlo simulation. Our findings suggest that type I error for efficacy analyses decreases as efficacy-safety correlation or between-group difference in the safety event rate increases. In addition, although power for efficacy is robust to misspecification of the efficacy-safety correlation, it decreases dramatically as between-group difference in the safety event rate increases.

12.
The success of a seasonal influenza vaccine efficacy trial depends not only upon the design but also upon the annual epidemic characteristics. In this context, simulation methods are an essential tool in evaluating the performance of study designs under various circumstances. However, traditional methods for simulating time-to-event data are not suitable for the simulation of influenza vaccine efficacy trials because of the seasonality and heterogeneity of influenza epidemics. Instead, we propose a mathematical model parameterized with historical surveillance data, heterogeneous frailty among the subjects, survey-based heterogeneous numbers of daily contacts, and a mixed vaccine protection mechanism. We illustrate our methodology by generating multiple-trial data similar to a large phase III trial that failed to show additional relative vaccine efficacy of an experimental adjuvanted vaccine compared with the reference vaccine. We show that small departures from the design assumptions, such as a smaller range of strain protection for the experimental vaccine or the chosen endpoint, could lead to smaller probabilities of success in showing significant relative vaccine efficacy. Copyright © 2015 John Wiley & Sons, Ltd.

13.
The author notes that "for calculating income distribution and expenditure by groups of households, annual averages are required concerning the number of private households by household groups and...the structure of the household members in a socio-economic classification." Problems of adjusting such household data so that they are compatible with data from other sources such as employment statistics are discussed, and a procedure for adjusting annual micro-census data on households in the Federal Republic of Germany is described. "Results for the year 1982 are presented, showing the main development trends as compared with the year 1972. Finally, the quality of forecasting with the adjustment procedure is studied."

14.
We present a three-stage, nonparametric estimation procedure to recover willingness to pay for housing attributes. In the first stage we estimate a nonparametric hedonic home price function. In the second stage we recover each consumer's taste parameters for product characteristics using first-order conditions for utility maximization. Finally, in the third stage we estimate the distribution of household tastes as a function of household demographics. As an application of our methods, we compare alternative explanations for why blacks choose to live in center cities while whites suburbanize.

15.
We study application of the Exponential Tilt Model (ETM) to compare survival distributions in two groups. The ETM assumes a parametric form for the density ratio of the two distributions. It accommodates a broad array of parametric models such as the log-normal and gamma models and can be sufficiently flexible to allow for crossing hazard and crossing survival functions. We develop a nonparametric likelihood approach to estimate ETM parameters in the presence of censoring and establish related asymptotic results. We compare the ETM to the Proportional Hazards Model (PHM) in simulation studies. When the proportional hazards assumption is not satisfied but the ETM assumption is, the ETM has better power for testing the hypothesis of no difference between the two groups. Importantly, when the ETM relation is not satisfied but the PHM assumption is, the ETM can still have power reasonably close to that of the PHM. Application of the ETM is illustrated by a gastrointestinal tumor study.
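The ETM's density-ratio assumption can be illustrated with the simplest case: exponentially tilting a standard normal density by exp(βx), with normalizer exp(-β²/2) from the normal moment generating function, shifts the mean to β while staying in the same family. This toy check illustrates only the model assumption, not the paper's censored-data estimation method.

```python
import math

def normal_pdf(x, mu=0.0, sd=1.0):
    z = (x - mu) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2.0 * math.pi))

def tilted_pdf(x, beta):
    # exponential tilt of the standard normal:
    # g(x) = f(x) * exp(beta * x - beta**2 / 2),
    # where exp(-beta**2 / 2) normalizes via the normal MGF
    return normal_pdf(x) * math.exp(beta * x - beta ** 2 / 2.0)
```

Richer tilts (e.g. adding an x² term) move between distributions with different variances, which is how the model accommodates crossing hazards.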

16.
Summary.  The number of people to select within selected households has significant consequences for the conduct and output of household surveys. The operational and data quality implications of this choice are carefully considered in many surveys, but the effect on statistical efficiency is not well understood. The usual approach is to select all people in each selected household, where operational and data quality concerns make this feasible. If not, one person is usually selected from each selected household. We find that this strategy is not always justified, and we develop intermediate designs between these two extremes. Current practices were developed when household survey field procedures needed to be simple and robust; however, more complex designs are now feasible owing to the increasing use of computer-assisted interviewing. We develop more flexible designs by optimizing survey cost, based on a simple cost model, subject to a required variance for an estimator of population total. The innovation lies in the fact that household sample sizes are small integers, which creates challenges in both design and estimation. The new methods are evaluated empirically by using census and health survey data, showing considerable improvement over existing methods in some cases.

17.
We consider the problem of data-based choice of the bandwidth of a kernel density estimator, with an aim to estimate the density optimally at a given design point. The existing local bandwidth selectors seem to be quite sensitive to the underlying density and location of the design point. For instance, some bandwidth selectors perform poorly while estimating a density, with bounded support, at the median. Others struggle to estimate a density in the tail region or at the trough between the two modes of a multimodal density. We propose a scale invariant bandwidth selection method such that the resulting density estimator performs reliably irrespective of the density or the design point. We choose bandwidth by minimizing a bootstrap estimate of the mean squared error (MSE) of a density estimator. Our bootstrap MSE estimator is different in the sense that we estimate the variance and squared bias components separately. We provide insight into the asymptotic accuracy of the proposed density estimator.
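A generic version of the bootstrap bandwidth criterion described above: for each candidate bandwidth, estimate the variance of the kernel estimate at the design point by resampling, estimate the squared bias against a pilot estimate at a wider bandwidth, and minimize their sum. The Gaussian kernel, pilot bandwidth, and sample below are illustrative assumptions; the paper's separate variance and bias estimators differ in detail.

```python
import math
import random

def kde(x, data, h):
    # Gaussian kernel density estimate at the design point x
    c = 1.0 / (len(data) * h * math.sqrt(2.0 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data)

def bootstrap_mse(x, data, h, pilot_h, n_boot=200, seed=0):
    # variance of the estimate at x over bootstrap resamples, plus the
    # squared bias against a pilot estimate at a wider bandwidth that
    # stands in for the unknown true density
    rng = random.Random(seed)
    target = kde(x, data, pilot_h)
    ests = [kde(x, [rng.choice(data) for _ in data], h) for _ in range(n_boot)]
    mean_est = sum(ests) / n_boot
    var = sum((e - mean_est) ** 2 for e in ests) / n_boot
    bias = mean_est - target
    return var + bias * bias

def select_bandwidth(x, data, grid, pilot_h):
    return min(grid, key=lambda h: bootstrap_mse(x, data, h, pilot_h))

# hypothetical sample from a standard normal, density estimated at 0
rng = random.Random(1)
sample = [rng.gauss(0.0, 1.0) for _ in range(80)]
h_star = select_bandwidth(0.0, sample, [0.2, 0.5, 1.0], pilot_h=0.7)
```

Because the criterion is evaluated at a single design point, the selected bandwidth is local: repeating the search at a tail point or at a trough would generally pick a different h.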

18.
Abstract.  In a case–cohort design a random sample from the study cohort, referred to as a subcohort, and all the cases outside the subcohort are selected for collecting extra covariate data. The union of the selected subcohort and all cases is referred to as the case–cohort set. Such a design is generally employed when the collection of information on an extra covariate for the whole study cohort is expensive. An advantage of the case–cohort design over the more traditional case–control and nested case–control designs is that it provides a set of controls which can be used for multiple end-points, in which case there is information on some covariates and event follow-up for the whole study cohort. Here, we propose a Bayesian approach to analyse such a case–cohort design as a cohort design with incomplete data on the extra covariate. We construct likelihood expressions when multiple end-points are of interest simultaneously and propose a Bayesian data augmentation method to estimate the model parameters. A simulation study is carried out to illustrate the method and the results are compared with the complete cohort analysis.

19.
Dagum and Slottje (2000) estimated household human capital (HC) as a latent variable (LV) and proposed its monetary estimation by means of an actuarial approach. This paper introduces an improved method for the estimation of household HC as an LV by means of formative and reflective indicators in agreement with the accepted economic definition of HC. The monetary value of HC is used in a recursive causal model to obtain short- and long-term multipliers that measure the direct and total effects of the variables that determine household HC. The new method is applied to estimate US household HC for year 2004.

20.
Household Expenditure Survey (HES) data are widely reported in grouped form for a number of reasons. Only within-group arithmetic means (AMs) of the household expenditures on various consumption items, total expenditure, income, and other variables are reported in the tabular form. However, the use of such within-group AMs introduces biases when the parameters of various commonly used non-linear Engel functions are estimated by Aitken's generalized least squares (GLS) method. This is because the within-group geometric means (GMs)/harmonic means (HMs) are needed in order to estimate unbiased parameters of those non-linear Engel functions. Kakwani (1977) estimated the within-group GMs/HMs from the Kakwani-Podder (1976) Lorenz curve for Indonesian data. We extend his method to estimate within-group GMs/HMs for a set of variables, based on a general type of concentration curve. It is shown that our estimated within-group GMs/HMs based on concentration curves are not entirely suitable for the Australian HES data. Nevertheless, these GMs/HMs are then used to estimate parameters for various non-linear Engel functions, and for some items of certain non-linear Engel functions the resulting elasticities differ from those obtained when the reported within-group AMs are used as proxies for the within-group GMs/HMs. The concept of the average elasticity of a variable elasticity Engel function is discussed and computed for various Australian household consumption items. It is empirically demonstrated that the average elasticities are more meaningful than the traditional elasticity estimates computed at some representative values for certain functions.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号