Similar Articles (20 results)
1.
This paper extends an optimal frequency-domain test for the detection of synchronous patterns in multiple time series to the case of fuzzy patterns, which are not confined to single frequencies or narrow frequency bands. Applying this extension to corn futures with different delivery dates, we obtain significant results only for the spreads between the different contracts, not for the original contracts, indicating that spread trading has the advantage of increased predictability.

2.
For some, gambling is a harmless pleasure. For others it is a wrecker of lives. Charities such as Gamblers Anonymous (gamblersanonymous.org.uk) and Gamcare (Gamcare.org.uk) exist to help such people; but why do they find themselves addicted to gambling and unable to resist its allure? Gambling behaviour is complex, yet most psychological research has confined itself to narrow areas of specialisation. Dr Mark Griffiths tells us that a more integrated approach is needed to understand the power of the gamble.

3.
In any epidemic, there may exist an unidentified subpopulation which might be naturally immune or isolated and which will not be involved in the transmission of the disease. Estimating key parameters, for example the basic reproductive number, without accounting for this possibility would underestimate the severity of the epidemic. Here, we propose a procedure to estimate the basic reproductive number (R0) in an epidemic model with an unknown initial number of susceptibles. The infection process is usually not completely observed, but is reconstructed by a kernel-smoothing method under a counting process framework. Simulation is used to evaluate the performance of the estimators for major epidemics. We illustrate the procedure using the Abakaliki smallpox data.

4.
Considerable progress has been made in applying Markov chain Monte Carlo (MCMC) methods to the analysis of epidemic data. However, this likelihood-based method can be inefficient due to the limited data available concerning an epidemic outbreak. This paper considers an alternative approach to studying epidemic data using Approximate Bayesian Computation (ABC) methodology. ABC is a simulation-based technique for obtaining an approximate sample from the posterior distribution of the model parameters, and in an epidemic context it is very easy to implement. A new approach to ABC is introduced which generates a set of values from the (approximate) posterior distribution of the parameters during each simulation, rather than a single value. This is based upon coupling simulations with different sets of parameters, and we call the resulting algorithm coupled ABC. The new methodology is used to analyse final-size data for epidemics amongst communities partitioned into households. It is shown that for these epidemic data sets coupled ABC is more efficient than ABC and MCMC-ABC.
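The rejection-ABC step that this paper builds on can be sketched compactly. Everything below is illustrative rather than the paper's coupled algorithm: a Reed-Frost chain-binomial household model, a uniform prior on the per-contact infection probability, and an exact-match acceptance rule applied to three hypothetical households of size three.

```python
import random

def household_final_size(n, p, rng):
    # Reed-Frost chain binomial in a household of size n with one initial
    # infective: each infective infects each remaining susceptible
    # independently with probability p per generation.
    susceptible, infective = n - 1, 1
    while infective > 0 and susceptible > 0:
        new_cases = sum(1 for _ in range(susceptible)
                        if rng.random() < 1 - (1 - p) ** infective)
        susceptible -= new_cases
        infective = new_cases
    return n - 1 - susceptible  # number of secondary cases

def abc_rejection(observed, n, n_sims, rng):
    # Plain rejection ABC: draw p from the prior, simulate the final-size
    # data, and accept p only when the simulation matches exactly.
    accepted = []
    for _ in range(n_sims):
        p = rng.random()  # Uniform(0, 1) prior
        sims = sorted(household_final_size(n, p, rng) for _ in observed)
        if sims == sorted(observed):
            accepted.append(p)
    return accepted

rng = random.Random(1)
posterior = abc_rejection(observed=[0, 0, 2], n=3, n_sims=5000, rng=rng)
```

The paper's coupled variant improves on this by harvesting a whole set of acceptable parameter values from each simulation instead of a single accept/reject decision.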

5.
We consider the optimal design of controlled experimental epidemics or transmission experiments, whose purpose is to inform the practitioner about disease transmission and recovery rates. Our methodology employs Gaussian diffusion approximations, applicable to epidemics that can be modeled as density-dependent Markov processes and involving relatively large numbers of organisms. We focus on finding (i) the optimal times at which to collect data about the state of the system for a small number of discrete observations, (ii) the optimal numbers of susceptible and infective individuals to begin an experiment with, and (iii) the optimal number of replicate epidemics to use. We adopt the popular D-optimality criterion as providing an appropriate objective function for designing our experiments, since this leads to estimates with maximum precision, subject to valid assumptions about parameter values. We demonstrate the broad applicability of our methodology using a diverse array of compartmental epidemic models: a time-homogeneous SIS epidemic, a time-inhomogeneous SI epidemic with exponentially decreasing transmission rates and a partially observed SIR epidemic where the infectious period for an individual has a gamma distribution.
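For a single scalar parameter the D-optimality criterion reduces to maximising the Fisher information itself, which makes the design logic easy to show in miniature. The model below, a noisily observed exponential decay loosely analogous to a decaying transmission rate, is a toy assumption and not one of the paper's Gaussian diffusion approximations:

```python
import math

def fisher_info(t, lam):
    # Gaussian observation y(t) = exp(-lam * t) + noise with known variance:
    # the information about lam from one observation at time t is
    # proportional to the squared sensitivity (d mu / d lam)^2.
    return (t * math.exp(-lam * t)) ** 2

def d_optimal_time(lam, grid):
    # With one parameter, D-optimality = maximise the scalar information.
    return max(grid, key=lambda t: fisher_info(t, lam))

grid = [i / 100 for i in range(1, 501)]
t_star = d_optimal_time(lam=2.0, grid=grid)  # analytic optimum is 1 / lam
```

Note how the optimal observation time depends on the assumed value of lam, which is exactly the "subject to valid assumptions about parameter values" caveat in the abstract.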

6.
We consider delays that occur in the reporting of events such as cases of a reportable disease or insurance claims. Estimation of the number of events that have occurred but not yet been reported (OBNR events) is then important. Current methods of doing this do not allow random temporal fluctuations in reporting delays, and consequently, confidence or prediction limits on OBNR events tend to be too narrow. We develop an approach that uses recent reporting data and incorporates random effects, thus leading to more reasonable and robust predictions.
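A classical fixed-delay baseline for predicting occurred-but-not-reported events is the chain-ladder estimator; the abstract's point is that such methods treat the delay distribution as fixed, so this sketch is the benchmark the proposed random-effects approach improves upon. The reporting triangle is made-up data:

```python
def chain_ladder_obnr(triangle):
    # triangle[i][d] = events occurring in period i and reported with delay d;
    # row i is observed only for delays 0 .. n-1-i (a run-off triangle).
    n = len(triangle)
    cum = [[sum(row[: d + 1]) for d in range(len(row))] for row in triangle]
    # Development factors, estimated from all rows observed at both delays.
    factors = []
    for d in range(n - 1):
        num = sum(cum[i][d + 1] for i in range(n - d - 1))
        den = sum(cum[i][d] for i in range(n - d - 1))
        factors.append(num / den)
    preds = []
    for i in range(n):
        latest = cum[i][n - 1 - i]
        ultimate = latest
        for d in range(n - 1 - i, n - 1):
            ultimate *= factors[d]
        preds.append(ultimate - latest)  # expected still-unreported events
    return preds

# Hypothetical triangle with true delay probabilities (0.5, 0.3, 0.2):
preds = chain_ladder_obnr([[50, 30, 20], [50, 30], [50]])
```

Because the factors are treated as known constants, prediction intervals built around such point estimates ignore delay fluctuation, which is precisely why they tend to be too narrow.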

7.
Back-projection is a commonly used method for reconstructing HIV incidence. Instead of using AIDS incidence data in back-projection, this paper uses data on HIV-positive tests. Both multinomial and Poisson settings are used. The two settings give similar results when a parametric form or step function is assumed for the infection curve. However, this may not be true when the HIV infection in each year is characterized by a different parameter. This paper uses simulation studies to compare the two settings by constructing various scenarios for the infection curve. Results show that both methods give approximately the same estimates of the number of HIV infections in the distant past, while the estimates for the recent past differ substantially. The multinomial setting always gives a levelling-off pattern for the recent past, while the Poisson setting is more sensitive to changes in the shape of the HIV infection curve. Nonetheless, the multinomial setting gives a relatively narrower point-wise probability interval. When the size of the epidemic is large, the narrow probability interval may understate the true underlying variation.
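At its core, back-projection is a deconvolution: observed diagnosis counts are the past infection curve convolved with a known incubation (or test-seeking) distribution. A Richardson-Lucy style EM update, of the kind used for Poisson back-projection, can be sketched as follows; the degenerate incubation distribution in the example is only a sanity check, not realistic HIV data:

```python
def back_project(diagnoses, incubation, n_iter=200):
    # EM deconvolution: recover infection counts h[s] from diagnosis
    # counts d[t] = sum_s h[s] * incubation[t - s].
    # `incubation` must give delay probabilities for delays 0 .. T-1.
    T = len(diagnoses)
    h = [sum(diagnoses) / T] * T  # flat starting curve
    for _ in range(n_iter):
        mu = [sum(h[s] * incubation[t - s] for s in range(t + 1))
              for t in range(T)]  # fitted diagnosis intensities
        new_h = []
        for s in range(T):
            num = sum(diagnoses[t] * incubation[t - s] / mu[t]
                      for t in range(s, T) if mu[t] > 0)
            den = sum(incubation[t - s] for t in range(s, T))
            new_h.append(h[s] * num / den if den > 0 else 0.0)
        h = new_h
    return h

# Degenerate check: diagnosis at infection, so h should reproduce d exactly.
diagnoses = [10.0, 20.0, 30.0, 40.0]
incubation = [1.0, 0.0, 0.0, 0.0]
h = back_project(diagnoses, incubation)
```

With a realistic, spread-out incubation distribution the recent part of h is only weakly identified, which is the source of the recent-past disagreement the abstract describes.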

8.
张萃  符航 《统计研究》2021,38(8):111-120
As a major public health emergency, infectious disease epidemic risk is an emerging topic that merits attention. From a relatively new network-topology perspective, and taking the COVID-19 epidemic as an example, this paper constructs an inter-city epidemic linkage network formed by the movement of infected populations and examines how a city's position in that network affects its epidemic risk. We find that epidemics across cities exhibit close network interdependence, and that a city's degree of epidemic risk is tightly linked to its position in the network: cities occupying important network positions are more strongly connected to other cities and therefore face greater epidemic risk, a finding that is especially pronounced for city clusters and for the clustered-infection risk of transport hubs. Extended analysis shows that a city's network centrality amplifies the risk of epidemic diffusion, and that the closure of travel channels out of Wuhan helped weaken the effect of a city's core position in the epidemic linkage network on its own epidemic risk.
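The "network position" effect described here is typically quantified with a centrality measure. A minimal eigenvector-centrality sketch on a hypothetical four-city flow network (the adjacency matrix and hub-and-spoke layout are invented for illustration; the paper's network is built from actual population flows):

```python
def eigenvector_centrality(adj, n_iter=200):
    # Power iteration on (A + I): same eigenvectors as A, but the shift
    # prevents oscillation on bipartite graphs such as a pure hub-and-spoke.
    n = len(adj)
    x = [1.0] * n
    for _ in range(n_iter):
        y = [x[i] + sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(y)
        x = [v / norm for v in y]
    return x

# Hypothetical network: city 0 is a transport hub linked to all others.
adj = [[0, 1, 1, 1],
       [1, 0, 0, 0],
       [1, 0, 0, 0],
       [1, 0, 0, 0]]
scores = eigenvector_centrality(adj)  # the hub should score highest
```

A city with a high score is strongly connected to other well-connected cities, which is the structural feature the paper links to elevated epidemic risk.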

9.
A Partial Likelihood Estimator of Vaccine Efficacy
A partial likelihood method is proposed for estimating vaccine efficacy in a general epidemic model. In contrast to the maximum likelihood estimator (MLE), which requires complete observation of the epidemic, the suggested method only requires information on the sequence in which individuals are infected, not the exact infection times. A simulation study shows that the method performs almost as well as the MLE. The method is applied to data on the infectious disease mumps.
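The partial likelihood uses only the order of infections: at each infection, the probability that the case is vaccinated depends on the relative susceptibility r of vaccinated individuals and on who is still at risk, and vaccine efficacy is estimated as 1 − r. The infection sequence and group sizes below are hypothetical, and a crude grid search stands in for a proper maximiser:

```python
import math

def log_partial_lik(r, order, n_vacc, n_unvacc):
    # order: infection sequence of 'V' (vaccinated) / 'U' (unvaccinated).
    # At each infection the vaccinated are at risk with relative weight r.
    sv, su, ll = n_vacc, n_unvacc, 0.0
    for who in order:
        denom = r * sv + su
        if who == "V":
            ll += math.log(r * sv / denom)
            sv -= 1
        else:
            ll += math.log(su / denom)
            su -= 1
    return ll

def estimate_ve(order, n_vacc, n_unvacc):
    # Grid search over the relative susceptibility r in (0, 1].
    grid = [i / 1000 for i in range(1, 1001)]
    r_hat = max(grid, key=lambda r: log_partial_lik(r, order, n_vacc, n_unvacc))
    return 1 - r_hat  # vaccine efficacy

# Hypothetical outbreak: four unvaccinated cases, then one vaccinated case.
ve = estimate_ve(order="UUUUV", n_vacc=10, n_unvacc=10)
```

Notice that exact infection times never enter the computation, only the order, which is what makes the method usable when the epidemic is not fully observed.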

10.
This paper is concerned with methods for the numerical calculation of the final outcome distribution for a well-known stochastic epidemic model in a closed population. The model is of the SIR (Susceptible → Infected → Removed) type, and the infectious period can have any specified distribution. The final outcome distribution is specified by the solution of a triangular system of linear equations, but the form of the distribution leads to inherent numerical problems in the solution. Here we employ multiple precision arithmetic to surmount these problems. As applications of our methodology, we assess the accuracy of two approximations that are frequently used in practice, namely an approximation for the probability of an epidemic occurring, and a Gaussian approximation to the final number infected in the event of an outbreak. We also present an example of Bayesian inference for the epidemic threshold parameter.
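The triangular system and the precision issue can both be illustrated directly. With an exponential infectious period the system's coefficients are rational, so exact rational arithmetic plays the role of the multiple-precision arithmetic used in the paper; the parameter values below are arbitrary, and the recursion is the classical Ball triangular system rather than this paper's specific implementation:

```python
from fractions import Fraction
from math import comb

def final_size_dist(n, a, lam, gamma):
    # Final-size distribution for a stochastic SIR epidemic with n initial
    # susceptibles, a initial infectives, per-pair contact rate lam / n, and
    # exponential infectious period (rate gamma), whose Laplace transform is
    # phi(theta) = gamma / (gamma + theta).  Solves the triangular system
    #   sum_{k<=l} C(n-k, l-k) P_k / phi(lam (n-l)/n)^(k+a) = C(n, l)
    # forward in l, in exact rational arithmetic to avoid the numerical
    # instability this system is known for.
    lam, gamma = Fraction(lam), Fraction(gamma)
    def phi(theta):
        return gamma / (gamma + theta)
    P = []
    for l in range(n + 1):
        ph = phi(lam * Fraction(n - l, n))
        s = sum(comb(n - k, l - k) * P[k] / ph ** (k + a) for k in range(l))
        P.append((comb(n, l) - s) * ph ** (l + a))
    return P  # P[k] = probability that k of the n susceptibles are infected

P = final_size_dist(n=5, a=1, lam=2, gamma=1)
```

The floating-point version of the same recursion loses accuracy quickly as n grows, because the binomial coefficients and the powers of phi differ by many orders of magnitude; exact (or multiple-precision) arithmetic sidesteps this.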

11.
An epidemic model for the spread of an infectious disease in a population of families is considered. The score test of the hypothesis that there is no higher infectivity between family members is constructed under the assumption that the epidemic process is observed continuously up to some time t. The score process is a martingale as a function of t, and by letting the number of families tend to infinity, a central limit theorem for the process can be proved. The central limit theorem not only justifies a normal approximation of the test statistic; it also suggests a variance estimator that is smaller than expected.

12.
A suggestion made in David (1963) has been partially misinterpreted by Castello and Wolfe (1987). Their proposed procedure is critically discussed. The general issue of what constitutes an appropriate test under possibly dependent observations is not confined to the method of paired comparisons.

13.
A block-structured transient Markov process is introduced to describe an epidemic spreading within two linked populations, of carriers and susceptibles. The epidemic terminates as soon as there are no more carriers or susceptibles present in the population. Our purpose is to determine the distribution of the final susceptible and carrier states, and of any integral path for the susceptible process. The transient epidemic state is also briefly discussed. Then, the model is extended to allow the recovery of infected individuals. Finally, several particular models, some known, are used for illustration.

14.
The size of the population affected by HIV/AIDS is a vital question asked by healthcare providers. A statistical procedure called back-calculation has been the most widely used method to answer that question. Recent discussions suggest that this method is gradually becoming less appropriate for reliable incidence and prevalence estimates, as it does not take into account the effect of treatment. In spite of this, the current paper uses that method, together with a worst-case scenario, to assess the quality of previous projections and obtain new ones. The first problem faced was the need to account for reporting delays, non-reporting and underreporting. The adjusted AIDS incidence data were then used to obtain lower bounds on the size of the AIDS epidemic, using the back-calculation methodology. Weibull and gamma distributions were considered for the latency period. The EM algorithm was applied to obtain maximum likelihood estimates of the HIV incidence. The density of infection times was parameterized as a step function. The methodology is applied to AIDS incidence in Portugal for four different transmission categories (injecting drug users, heterosexual, homo/bisexual and other) to obtain short-term projections (2002–2005) and an estimate of the minimum size of the epidemic.

15.
Many epidemic models approximate social contact behavior by assuming random mixing within mixing groups (e.g., homes, schools, and workplaces). The effect of more realistic social network structure on estimates of epidemic parameters is an open area of exploration. We develop a detailed statistical model to estimate the social contact network within a high school using friendship network data and a survey of contact behavior. Our contact network model includes classroom structure and, based on reports in the contact survey, longer contact durations and more frequent contacts with friends than with non-friends. We performed simulation studies to explore which network structures are relevant to influenza transmission. These studies yield two key findings. First, we found that the friendship network structure important to the transmission process can be adequately represented by a dyad-independent exponential random graph model (ERGM). This means that individual-level sampled data are sufficient to characterize the entire friendship network. Second, we found that contact behavior was adequately represented by a static rather than dynamic contact network. We then compare a targeted antiviral prophylaxis intervention strategy and a grade-closure intervention strategy under random mixing and network-based mixing. We find that random mixing overestimates the effect of targeted antiviral prophylaxis on the probability of an epidemic when the probability of transmission in 10 minutes of contact is less than 0.004, and underestimates it when this transmission probability is greater than 0.004. We found the same pattern for the final size of an epidemic, with a threshold transmission probability of 0.005. We also find that random mixing overestimates the effect of a grade-closure intervention on the probability of an epidemic and on final size for all transmission probabilities.
Our findings have implications for policy recommendations based on models assuming random mixing, and can inform further development of network-based models.
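The random-mixing versus network-mixing contrast can be reproduced in miniature by running the same Reed-Frost epidemic on two graphs over the same population: a complete graph (random mixing) and a sparse ring (an extreme stand-in for network structure). Population size, transmission probability, and replicate count are invented, and this is far simpler than the paper's ERGM-based school network:

```python
import random

def reed_frost(neighbors, p, seed_node, rng):
    # Chain-binomial epidemic on a contact graph: each infective infects
    # each susceptible neighbour independently with probability p.
    infected = {seed_node}
    ever = {seed_node}
    while infected:
        new = set()
        for i in infected:
            for j in neighbors[i]:
                if j not in ever and rng.random() < p:
                    new.add(j)
        ever |= new
        infected = new
    return len(ever)

def mean_final_size(neighbors, p, n_reps, rng):
    n = len(neighbors)
    return sum(reed_frost(neighbors, p, rng.randrange(n), rng)
               for _ in range(n_reps)) / n_reps

rng = random.Random(7)
n = 60
complete = [[j for j in range(n) if j != i] for i in range(n)]  # random mixing
ring = [[(i - 1) % n, (i + 1) % n] for i in range(n)]           # sparse network
mix = mean_final_size(complete, p=0.05, n_reps=200, rng=rng)
net = mean_final_size(ring, p=0.05, n_reps=200, rng=rng)
```

Holding p fixed, the mixing assumption alone changes the predicted outbreak size dramatically, which is why intervention effects estimated under random mixing can be biased in either direction.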

16.
In an outbreak of a completely new infectious disease like severe acute respiratory syndrome (SARS), estimation of the fatality rate over the course of the epidemic is of clinical and epidemiological importance. In contrast with the constant case fatality rate, a new measure, termed the 'real-time' fatality rate, is proposed for monitoring a newly emerging epidemic at the population level. A competing-risks model implemented via a counting process is used to estimate the real-time fatality rate in an epidemic of SARS. It can capture and reflect the time-varying nature of the fatality rate over the course of the outbreak in a timely and accurate manner. More importantly, it can provide information on the efficacy of a certain treatment and management policy for the disease. The method has been applied to the SARS data from the affected regions, namely Hong Kong, Singapore, Toronto, Taiwan and Beijing. The magnitudes and patterns of the estimated fatalities are virtually the same except in Beijing, which has a lower rate. It is speculated that this effect is linked to the different treatment protocols that were used. The standard estimate of the case fatality rate that was used by the World Health Organization has been shown to be unable to provide useful information for monitoring the time-varying fatalities caused by the epidemic.

17.
Since the first properly randomized control trial of streptomycin for pulmonary tuberculosis in the late 1940s, society has made great advances in combating bacterial infections and in developing vaccines to prevent such infections. One constant challenge that anti-bacterial clinical development must grapple with is to determine the potential benefit of newer agents over existing agents, in an era when anti-bacterial resistance is a constantly shifting target. By contrast, the development of anti-fungal agents went into high gear only in the late 1980s and early 1990s in an effort to manage fungal infections in cancer patients receiving chemotherapy, especially in patients with haematologic malignancies, bone marrow transplantation, or lymphoma. The pursuit of anti-fungal agents intensified with the AIDS epidemic. The evaluation of anti-fungal agents often faces complications brought on by competing risks in situations where the underlying infections are associated with a high chance of mortality or severe morbidity. In this paper, we use four case studies to illustrate some of the challenges and opportunities in developing anti-bacterial and anti-fungal agents. The illustrations touch on not only statistical issues, but also issues related to the availability of new anti-bacterials in the future. Some suggestions on how statisticians could take advantage of the opportunities and respond to the challenges are also included. Copyright © 2005 John Wiley & Sons, Ltd.

18.
A multitype epidemic model is analysed assuming proportionate mixing between types. Estimation procedures for the susceptibilities and infectivities are derived for three sets of data: complete data, meaning that the whole epidemic process is observed continuously; the removal processes are observed continuously; only the final state is observed. Under the assumption of a major outbreak in a population of size n it is shown that, for all three data sets, the susceptibility estimators are always efficient, i.e. consistent with a √n rate of convergence. The infectivity estimators are 'in most cases' respectively efficient, efficient and unidentifiable. However, if some susceptibilities are equal then the corresponding infectivity estimators are respectively barely consistent (√log(n) rate of convergence), not consistent, and unidentifiable. The estimators are applied to simulated data.

19.
Data augmentation is required for the implementation of many Markov chain Monte Carlo (MCMC) algorithms. The inclusion of augmented data can often lead to conditional distributions from well-known probability distributions for some of the parameters in the model. In such cases, collapsing (integrating out parameters) has been shown to improve the performance of MCMC algorithms. We show how integrating out the infection rate parameter in epidemic models leads to efficient MCMC algorithms for two very different epidemic scenarios, final outcome data from a multitype SIR epidemic and longitudinal data from a spatial SI epidemic. The resulting MCMC algorithms give fresh insight into real-life epidemic data sets.
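The collapsing idea can be seen in its simplest form. If the infection rate lam enters the likelihood as lam^k * exp(-lam * E), with k infection events and E the integrated infection pressure (a standard shape for SIR-type likelihoods), then a Gamma(a, b) prior lets lam be integrated out in closed form, so the MCMC sampler only needs to update the augmented data and can score each proposal with this marginal. The prior values below are illustrative:

```python
from math import lgamma, log, exp

def log_marginal(k, E, a=1.0, b=1.0):
    # Closed-form marginal with lam integrated out under a Gamma(a, b) prior:
    #   p(data) = b^a / Gamma(a) * Gamma(a + k) / (b + E)^(a + k)
    return a * log(b) - lgamma(a) + lgamma(a + k) - (a + k) * log(b + E)

# Sanity check: k = 3 events, exposure E = 2, Gamma(1, 1) prior.  The
# integrand is lam^3 * exp(-2*lam) * exp(-lam), which integrates to 3!/3^4.
m = exp(log_marginal(3, 2))
```

Because the infection rate never appears in the sampler's state, its mixing problems disappear, which is the efficiency gain the abstract refers to.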

20.
A change in the coefficients or in the innovation mean of an INAR(p) process is a sign of disturbance that is important to detect. The proposed methods can test for change in any one of these quantities separately, or in any collection of them. They make both one-sided and two-sided tests possible; furthermore, they can be used to test against the "epidemic" alternative. The tests are based on a CUSUM process using CLS estimators of the parameters. Under the one-sided and two-sided alternatives, consistency of the tests is proved, and the properties of the change-point estimator are also explored.
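A minimal version of the CUSUM-of-residuals construction can be sketched for INAR(1), where the conditional least squares (CLS) fit of the conditional mean coincides with ordinary least squares of x[t] on x[t-1]. The comparison value of roughly 1.36 (the 5% point of the supremum of a Brownian bridge) and the example series are illustrative; the paper's statistics and limit theory are more refined:

```python
import math

def cusum_change_test(x):
    # CLS fit of the INAR(1) conditional mean alpha * x[t-1] + mu
    # (= OLS of x[t] on x[t-1]), followed by a CUSUM of the residuals.
    n = len(x) - 1
    xs, ys = x[:-1], x[1:]
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((a - mx) ** 2 for a in xs)
    alpha = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sxx
    mu = my - alpha * mx
    resid = [b - alpha * a - mu for a, b in zip(xs, ys)]
    sd = math.sqrt(sum(e * e for e in resid) / n)
    if sd == 0:
        return 0.0  # perfect fit, nothing to detect
    cum, stat = 0.0, 0.0
    for e in resid:
        cum += e
        stat = max(stat, abs(cum) / (sd * math.sqrt(n)))
    return stat  # compare with ~1.36, the 5% point of sup |Brownian bridge|

# Invented count series whose level jumps mid-sample
# (a change in the innovation mean):
series = [2, 1, 3, 2, 2, 1, 2, 3, 2, 2, 9, 10, 9, 8, 9, 10, 9, 9, 8, 9]
stat = cusum_change_test(series)
```

Under no change the CUSUM behaves like a Brownian bridge, so its supremum stays moderate; a sustained shift in the parameters pushes the partial sums of the residuals systematically away from zero.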
