Similar articles
 20 similar articles found (search time: 31 ms)
1.
Most disease registries are updated at least yearly. If a geographically localized health hazard suddenly occurs, we would like to have a surveillance system in place that can pick up a new geographical disease cluster as quickly as possible, irrespective of its location and size. At the same time, we want to minimize the number of false alarms. By using a space–time scan statistic, we propose and illustrate a system for regular time periodic disease surveillance to detect any currently 'active' geographical clusters of disease and which tests the statistical significance of such clusters adjusting for the multitude of possible geographical locations and sizes, time intervals and time periodic analyses. The method is illustrated on thyroid cancer among men in New Mexico 1973–1992.
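As a rough illustration of the core computation in such a scan statistic, the sketch below evaluates the Poisson log-likelihood ratio over a handful of candidate zones. The regions, counts and zones are invented toy data (with expected counts assumed to sum to the total case count); a real implementation would scan all circle-by-time-interval cylinders and assess significance by Monte Carlo replication.

```python
import math

def poisson_llr(c, e, C):
    # Log-likelihood ratio for a candidate zone with c observed and
    # e expected cases, out of C total cases; zero unless c exceeds e.
    if c <= e:
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

# Toy data: cases observed in 4 regions over the most recent time window.
observed = {"A": 12, "B": 3, "C": 4, "D": 5}
expected = {"A": 5.0, "B": 4.5, "C": 5.0, "D": 9.5}
C = sum(observed.values())

# Candidate zones (in practice: all circular zones x recent intervals).
zones = [("A",), ("A", "B"), ("C", "D")]
best = max(zones, key=lambda z: poisson_llr(sum(observed[r] for r in z),
                                            sum(expected[r] for r in z), C))
```

The zone maximizing the statistic is the most likely "active" cluster; its p-value would come from re-running the maximization on simulated data sets.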

2.
Summary.  We review some prospective scan-based methods that are used in health-related applications to detect increased rates of mortality or morbidity and to detect bioterrorism or active clusters of disease. We relate these methods to the use of the moving average chart in industrial applications. Issues that are related to the performance evaluation of spatiotemporal scan-based methods are discussed. In particular we clarify the definition of a recurrence interval and demonstrate that this measure does not reflect some important aspects of the statistical performance of scan-based, and other, surveillance methods. Some research needs in this area are given.
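A minimal sketch of the moving average chart idea referred to above, assuming a simple fixed threshold on the windowed mean of counts; the window length and threshold are illustrative choices, not values from the paper.

```python
def moving_average_signals(counts, window, threshold):
    # Flag time indices where the trailing windowed mean of counts
    # exceeds the control threshold.
    signals = []
    for t in range(window - 1, len(counts)):
        ma = sum(counts[t - window + 1 : t + 1]) / window
        if ma > threshold:
            signals.append(t)
    return signals

# Toy weekly case counts with an outbreak around weeks 5-7.
weekly = [2, 3, 2, 4, 3, 9, 11, 10, 3, 2]
alarms = moving_average_signals(weekly, window=3, threshold=6.0)
```

In a calibrated chart the threshold would be set from in-control data to achieve a target false-alarm rate, which is where measures such as the recurrence interval enter.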

3.
The conditional mixture likelihood method, which uses the absolute difference of the trait values of a sib pair to estimate genetic parameters, underlies a commonly used method in linkage analysis. Here, the statistical properties of the model are examined. The marginal model with a pseudo-likelihood function based on a sample of the absolute differences of sib traits is also studied. Both approaches are compared numerically. When genotyping is much more expensive than screening a quantitative trait, it is known that extremely discordant sib pairs provide more powerful linkage tests than randomly sampled sib pairs. The Fisher information about genetic parameters contained in extremely discordant sib pairs is calculated using the marginal mixture model. Our results supplement current research showing that extremely discordant sib pairs are powerful for linkage detection by demonstrating that they also contain more information about other genetic parameters.

4.
Testing the existence of a quantitative trait locus (QTL) effect is an important task in QTL mapping studies. Most studies concentrate on the case where the phenotype distributions of different QTL groups follow normal distributions with the same unknown variance. In this paper we make a more general assumption that the phenotype distributions come from a location-scale distribution family. We derive the limiting distribution of the likelihood ratio test (LRT) for the existence of the QTL effect in both location and scale in genetic backcross studies. We further identify an explicit representation for this limiting distribution. As a complement, we study the limiting distribution of the LRT and its explicit representation for the existence of the QTL effect in the location only. The asymptotic properties of the LRTs under a local alternative are also investigated. Simulation studies are used to evaluate the asymptotic results, and a real-data example is included for illustration.

5.
6.
Zhu and Zhang [Zhu, W., & Zhang, H. (2009). Why do we test multiple traits in genetic association studies. Journal of the Korean Statistical Society, 38(1), 1–10] published the paper “Why Do We Test Multiple Traits in Genetic Association Studies?” in this issue. The authors used linear structural equations and acyclic graphs as tools to explore the performance of testing multiple traits simultaneously, through large-scale simulations for various genetic models. The methods, conclusions and results are of great interest in quantitative genetics. Diseases are caused by dynamic interaction among many genes and many environmental exposures through regulation and metabolism. In the past several decades, researchers have primarily focused on (1) the role of individual genetic variation in determining disease and (2) one single trait at a time. Little attention has been paid to determining how genetic variation and environmental perturbation are integrated into networks which act together to dynamically alter regulation and metabolism, leading to the emergence of complex phenotypes and diseases. Pending conceptual and statistical challenges are (1) how to identify networks involved in molecular phenotypes and endpoint clinical phenotypes under perturbation of environments and (2) how to connect DNA variation to disease outcomes through gene regulation and cellular intermediate traits. Structural equations and graphical models of multiple quantitative traits provide a general framework for developing novel analytic strategies for identifying the path from genomic information, coupled with environmental exposures, through gene expression and other intermediate traits, to the clinical endpoints of complex diseases, and so to meet the above conceptual and statistical challenges. 
In this discussion, we use structural equations to analyze multiple intermediate traits of ankylosing spondylitis (AS) as a real example, to further demonstrate the importance of the network approach to genetic studies of complex traits.

7.
Conventional optimization approaches, such as Linear Programming, Dynamic Programming and Branch-and-Bound methods are well established for solving relatively simple scheduling problems. Algorithms such as Simulated Annealing, Taboo Search and Genetic Algorithms (GA) have recently been applied to large combinatorial problems. Owing to the complex nature of these problems it is often impossible to search the whole problem space and an optimal solution cannot, therefore, be guaranteed. A BiCriteria Genetic Algorithm (BCGA) has been developed for the scheduling of complex products with multiple resource constraints and deep product structure. This GA identifies and corrects infeasible schedules and takes account of the early supply of components and assemblies, late delivery of final products and capacity utilization. The research has used manufacturing data obtained from a capital goods company. Genetic Algorithms include a number of parameters, including the probabilities of crossover and mutation, the population size and the number of generations. The BCGA scheduling tool provides 16 alternative crossover operations and eight different mutation mechanisms. The overall objective of this study was to develop an efficient design-of-experiments approach to identify genetic algorithm operators and parameters that produce solutions with minimum total cost. The case studies were based upon a complex, computationally intensive scheduling problem that was insoluble using conventional approaches. This paper describes an efficient sequential experimental strategy that enabled this work to be performed within a reasonable time. The first stage was a screening experiment, which had a fractional factorial embedded within a half Latin-square design. The second stage was a half-fraction design with a reduced number of GA operators. The results are compared with previous studies. 
It is demonstrated that, in this case, improved GA performance was achieved using the experimental strategy proposed. The appropriate genetic operators and parameters may be case specific, leading to the view that experimental design may be the best way to proceed when finding the ‘best’ combination of GA operators and parameters.
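The operator probabilities that such experiments vary can be seen in a stripped-down GA. The sketch below is a generic bit-string GA with tournament selection, not the BCGA itself; the cost function, encoding and parameter values are illustrative assumptions.

```python
import random

def run_ga(cost, n_genes, pop_size, generations, p_cross, p_mut, seed=0):
    # Minimal generational GA: size-2 tournament selection, one-point
    # crossover with probability p_cross, bit-flip mutation with
    # per-gene probability p_mut. p_cross and p_mut are the kind of
    # parameters a designed experiment would tune.
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)
            return min(a, b, key=cost)   # minimization
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_genes)
                p1 = p1[:cut] + p2[cut:]
            for i in range(n_genes):
                if rng.random() < p_mut:
                    p1[i] = 1 - p1[i]
            nxt.append(p1)
        pop = nxt
    return min(pop, key=cost)

# Toy cost: number of ones, so the optimum is the all-zero string.
best = run_ga(cost=sum, n_genes=20, pop_size=30, generations=40,
              p_cross=0.8, p_mut=0.02)
```

A screening design would run this with different (p_cross, p_mut, pop_size, generations) settings and compare final costs, as the paper does for its 16 crossover and 8 mutation operators.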

8.

9.
Four teams of analysts try to determine the existence of an association between inflammatory bowel disease and certain genetic markers on human chromosome number 6. Their investigation involves data on several control populations and on 110 families with two or more affected individuals. The problem is introduced by Mirea, Bull, Silverberg and Siminovitch; they and three other groups (Chen, Kalbfleisch and Romero‐Hidalgo; Darlington and Paterson; Roslin, Loredo‐Osti, Greenwood and Morgan) present analyses. Their approaches are discussed by Field and Smith.

10.
11.
We discuss a general application of categorical data analysis to mutations along the HIV genome. We consider a multidimensional table for several positions at the same time. Due to the complexity of the multidimensional table, we may collapse it by pooling some categories. However, the association between the remaining variables may not be the same as before collapsing. We discuss the collapsibility of tables and the change in the meaning of parameters after collapsing categories. We also address this problem with a log-linear model. We present a parameterization with the consensus output as the reference cell as is appropriate to explain genomic mutations in HIV. We also consider five null hypotheses and some classical methods to address them. We illustrate methods for six positions along the HIV genome, through consideration of all triples of positions.
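Non-collapsibility of this kind can be seen in a toy 2x2x2 table: the odds ratio within each stratum of a third variable differs from the odds ratio after pooling over it. The counts below are invented for illustration, not HIV data.

```python
# Counts for a 2x2x2 table, keyed by (X, Y, Z). Collapsing over Z
# changes the X-Y odds ratio even though it is the same in each stratum.
counts = {
    (0, 0, 0): 90, (0, 1, 0): 10, (1, 0, 0): 80, (1, 1, 0): 20,
    (0, 0, 1): 20, (0, 1, 1): 80, (1, 0, 1): 10, (1, 1, 1): 90,
}

def odds_ratio(t):
    # t maps (x, y) -> count for a 2x2 table.
    return (t[(0, 0)] * t[(1, 1)]) / (t[(0, 1)] * t[(1, 0)])

def stratum(z):
    return {(x, y): counts[(x, y, z)] for x in (0, 1) for y in (0, 1)}

def collapsed():
    return {(x, y): counts[(x, y, 0)] + counts[(x, y, 1)]
            for x in (0, 1) for y in (0, 1)}

or_z0 = odds_ratio(stratum(0))     # within stratum Z = 0
or_z1 = odds_ratio(stratum(1))     # within stratum Z = 1
or_marg = odds_ratio(collapsed())  # after pooling over Z
```

Here both conditional odds ratios equal 2.25, yet the marginal odds ratio is smaller, so interpreting parameters of the collapsed table as if they were conditional ones would be misleading.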

12.
A discussion of the 1980 U.S. census is presented. The authors suggest that the taking of a national census is not just a statistical exercise, but an exercise involving ethics, epistemology, law, and politics. They contend that conducting a national census can be defined as an ill-structured problem in which the various complexities imposed by multidisciplinarity cannot be separated. "The 1980 census is discussed as an ill-structured problem, and a method for treating such problems is presented, within which statistical information is only one component."

13.
14.
In this paper, asymptotic expansions of the null and non-null distributions of the sphericity test criterion in the case of a complex multivariate normal distribution are obtained for the first time in terms of beta distributions. In the null case, it is found that the accuracy of the approximation obtained by taking the first term alone in the asymptotic series is sufficient for practical purposes. In fact, for p = 2, the asymptotic expansion reduces to the first term, which is also the exact distribution in this case. Applications of the results to the area of inference on multivariate time series are also given.
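For intuition, the sphericity criterion can be computed directly for p = 2; the sketch below uses the familiar real-case form W = det(S) / (tr(S)/p)^p, which equals 1 exactly when the sample covariance is proportional to the identity, rather than the complex-case statistic analysed in the paper.

```python
def sphericity_criterion(S):
    # Likelihood-ratio sphericity criterion for a 2x2 covariance
    # matrix S: W = det(S) / (trace(S)/p)^p, with 0 < W <= 1 and
    # W = 1 iff S is proportional to the identity.
    p = 2
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    tr = S[0][0] + S[1][1]
    return det / (tr / p) ** p

w_spherical = sphericity_criterion([[2.0, 0.0], [0.0, 2.0]])  # = 1.0
w_elliptic = sphericity_criterion([[3.0, 1.0], [1.0, 1.0]])   # = 0.5
```

Small values of W give evidence against sphericity; the paper's contribution is accurate beta-distribution approximations to the null and non-null distributions of such a criterion in the complex normal case.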

15.
The most common phenomena in the evolution process are natural selection and genetic drift. In this article, we propose a probabilistic method to calculate the mean and variance of the time to random genetic drift equilibrium, measured in numbers of generations, based on a Markov process and a complex probabilistic model. We studied the case of a constant-size, panmictic population of diploid organisms with no mutation, selection or migration at a given autosomal locus with two possible alleles, H and h. The calculations presented in this article were based on a Markov process. They explain how allele and genotype frequencies change across generations and how heterozygotes are eventually lost after many generations. This calculation could be used in further evolutionary applications. Finally, some simulations are presented to illustrate the theoretical calculations under different initial conditions.
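A minimal sketch of such a Markov-process calculation, assuming the standard Wright–Fisher resampling model (the paper's exact model may differ): the expected number of generations until the allele is fixed or lost follows from the first-step equations of the chain.

```python
import math

def wright_fisher_matrix(N):
    # Wright-Fisher chain on 2N gene copies: the next generation's
    # allele count is Binomial(2N, i/2N) given current count i.
    M = 2 * N
    def pmf(k, n, p):
        return math.comb(n, k) * p**k * (1 - p)**(n - k)
    return [[pmf(j, M, i / M) for j in range(M + 1)] for i in range(M + 1)]

def expected_fixation_time(N, iters=5000):
    # First-step equations E[i] = 1 + sum_j P[i][j] * E[j], with the
    # absorbing boundaries E[0] = E[2N] = 0, solved by fixed-point
    # iteration (converges since the transient part is substochastic).
    M = 2 * N
    P = wright_fisher_matrix(N)
    E = [0.0] * (M + 1)
    for _ in range(iters):
        E = [0.0 if i in (0, M)
             else 1.0 + sum(P[i][j] * E[j] for j in range(M + 1))
             for i in range(M + 1)]
    return E

E = expected_fixation_time(5)   # population of N = 5 diploids, 2N = 10
```

Starting from allele frequency 1/2, the expected absorption time is close to the diffusion approximation 4N*ln(2) = 13.9 generations for N = 5.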

16.
Observation of adverse drug reactions during drug development can cause closure of the whole programme. However, if association between the genotype and the risk of an adverse event is discovered, then it might suffice to exclude patients of certain genotypes from future recruitment. Various sequential and non‐sequential procedures are available to identify an association between the whole genome, or at least a portion of it, and the incidence of adverse events. In this paper we start with a suspected association between the genotype and the risk of an adverse event and suppose that the genetic subgroups with elevated risk can be identified. Our focus is determination of whether the patients identified as being at risk should be excluded from further studies of the drug. We propose using a utility function to determine the appropriate action, taking into account the relative costs of suffering an adverse reaction and of failing to alleviate the patient's disease. Two illustrative examples are presented, one comparing patients who suffer from an adverse event with contemporary patients who do not, and the other making use of a reference control group. We also illustrate two classification methods, LASSO and CART, for identifying patients at risk, but we stress that any appropriate classification method could be used in conjunction with the proposed utility function. Our emphasis is on determining the action to take rather than on providing definitive evidence of an association. Copyright © 2008 John Wiley & Sons, Ltd.
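The role of a utility function in this decision can be sketched as follows. The probabilities, benefit and adverse-event cost below are invented for illustration and are not the paper's utility specification.

```python
def choose_action(p_ae_risk, p_ae_other, benefit, ae_cost, frac_at_risk):
    # Expected utility per eligible patient of recruiting everyone versus
    # excluding the genotype-defined at-risk subgroup. 'benefit' is the
    # utility of alleviating the disease; 'ae_cost' is the disutility of
    # an adverse reaction. All values here are illustrative assumptions.
    u_all = benefit - (frac_at_risk * p_ae_risk
                       + (1 - frac_at_risk) * p_ae_other) * ae_cost
    u_excl = (1 - frac_at_risk) * (benefit - p_ae_other * ae_cost)
    return "exclude" if u_excl > u_all else "include"

# High adverse-event risk in the subgroup -> exclusion pays off.
decision_high = choose_action(0.50, 0.01, benefit=1.0, ae_cost=10.0,
                              frac_at_risk=0.2)
# Barely elevated risk -> keeping the subgroup is better.
decision_low = choose_action(0.02, 0.01, benefit=1.0, ae_cost=10.0,
                             frac_at_risk=0.2)
```

The decision flips with the ratio of adverse-reaction cost to treatment benefit, which is exactly the trade-off the proposed utility function formalizes.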

17.
This evaluation method for journals meets the library's objective for a responsible, responsive collection management practice. It provides structure, but not without regard for the unique potential addition to the collection that each journal represents. Although subject to occasional fine-tuning, this method has worked satisfactorily for two hundred journal evaluations completed over the course of several years. It is a well-tested procedure and is adaptable to most library settings, in whole or in part, regardless of size or type of collection.

18.
Assisting fund investors in making better investment decisions when faced with market climate change is an important subject. For this purpose, we adopt a genetic algorithm (GA) to search for an optimal decay factor for an exponential weighted moving average model, which is used to calculate the value at risk combined with risk-adjusted return on capital (RAROC). We then propose a GA-based RAROC model. Next, using the model we find the optimal decay factor and investigate the performance and persistence of 31 Taiwanese open-end equity mutual funds over the period from November 2006 to October 2009, divided into three periods: November 2006–October 2007, November 2007–October 2008, and November 2008–October 2009, which includes the global financial crisis. We find that for three periods, the optimal decay factors are 0.999, 0.951, and 0.990, respectively. The rankings of funds between bull and bear markets are quite different. Moreover, the proposed model improves performance persistence. That is, a fund's past performance will continue into the future.
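A minimal sketch of the EWMA volatility, VaR and RAROC computation described above, with an illustrative decay factor and return series; the GA search for the optimal decay factor is omitted, and the normal quantile and capital base are illustrative assumptions.

```python
import math

def ewma_volatility(returns, decay):
    # RiskMetrics-style EWMA variance recursion:
    # var_t = decay * var_{t-1} + (1 - decay) * r_{t-1}^2.
    var = returns[0] ** 2
    for r in returns[1:]:
        var = decay * var + (1 - decay) * r ** 2
    return math.sqrt(var)

def raroc(returns, decay, z=1.645, capital=1.0):
    # Risk-adjusted return on capital: mean return divided by a
    # VaR-based risk capital (95% one-sided normal quantile).
    mu = sum(returns) / len(returns)
    value_at_risk = z * ewma_volatility(returns, decay) * capital
    return mu / value_at_risk

rets = [0.01, -0.02, 0.015, -0.005, 0.02, -0.01]
score = raroc(rets, decay=0.94)
```

The GA in the paper would evaluate such a score across candidate decay factors (e.g. 0.951 vs. 0.999) and return the factor giving the best fit for the period.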

19.
This paper describes the development of a multivariate statistical process performance monitoring scheme for a high-speed polyester film production facility. The objective for applying multivariate statistical process control (MSPC) was to improve product consistency, detect process changes and disturbances and increase operator awareness of the impact of both routine maintenance and unusual events. The background to MSPC is briefly described and the various stages in the development of an at-line MSPC representation for the production line are described. A number of case studies are used to illustrate the power of the methodology, highlighting its potential to assist in process maintenance, the detection of changes in process operation and the potential for the identification of badly tuned controller loops.
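One standard MSPC ingredient, offered here as an illustrative assumption since the abstract does not give the charting details, is a Hotelling T-squared statistic computed against reference in-control data. The sketch handles a two-variable process with invented numbers.

```python
def t2_statistic(ref, x):
    # Hotelling T^2 for a 2-variable process: squared Mahalanobis
    # distance of observation x from the mean of reference data ref,
    # using the sample covariance (2x2 inverse done by hand).
    n = len(ref)
    m = [sum(r[k] for r in ref) / n for k in (0, 1)]
    c = [[sum((r[i] - m[i]) * (r[j] - m[j]) for r in ref) / (n - 1)
          for j in (0, 1)] for i in (0, 1)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[ c[1][1] / det, -c[0][1] / det],
           [-c[1][0] / det,  c[0][0] / det]]
    d = [x[0] - m[0], x[1] - m[1]]
    return sum(d[i] * inv[i][j] * d[j] for i in (0, 1) for j in (0, 1))

# Invented in-control reference measurements (e.g. thickness, tension).
reference = [(1.0, 2.0), (1.2, 2.1), (0.9, 1.9), (1.1, 2.2), (1.0, 2.05)]
in_control = t2_statistic(reference, (1.05, 2.0))
fault = t2_statistic(reference, (2.0, 1.0))
```

In an at-line scheme the statistic would be compared against a control limit derived from an F distribution, with large values triggering diagnosis of the contributing variables.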

20.

