20 similar references found
1.
2.
Zhen Li, Dandan Ning, Zhichao Song, Gang Guo, Xiaogang Qiu 《Journal of Statistical Computation and Simulation》2017,87(18):3413-3439
Distributed agent-based simulation is a popular method for carrying out computational experiments on large-scale artificial societies. The strategy used to partition the artificial-society model among hosts plays an essential role in the simulation engine's execution efficiency, since it largely determines the communication overhead and computational load balance during simulation. Addressing this problem, we first analyze the execution and scheduling of agents during simulation and model it as a wide-sense cyclostationary random process. We then propose a static statistical partitioning model that yields the optimal partitioning strategy, minimizing the average communication cost and the load-imbalance factor. To solve this model, the paper recasts it as a graph-partitioning problem, and a statistical-movement graph-based partitioning algorithm is devised that builds the task-graph model by mining statistical movement information from the simulation model's initialization data. In the experiments, two other popular partitioning methods are used to evaluate the performance of the proposed graph-partitioning algorithm, and graph-partitioning performance is compared under different task-graph models. The results indicate that the proposed statistical-movement graph-based static partitioning method outperforms the other methods in reducing communication overhead while satisfying the load-balance constraint.
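The abstract above evaluates partitions by two objectives: communication cost and load imbalance. A minimal sketch (not the paper's algorithm) of how a candidate partition of an agent-interaction graph can be scored on those two objectives, assuming node weights stand for per-agent computational load and edge weights for message traffic:

```python
def evaluate_partition(node_weight, edges, assign, n_parts):
    """Return (edge_cut, imbalance_factor) for a candidate partition.

    assign maps node -> part index in [0, n_parts);
    edge_cut sums the weights of edges crossing parts (communication cost);
    imbalance_factor = max part load / average part load.
    """
    cut = sum(w for (u, v, w) in edges if assign[u] != assign[v])
    loads = [0.0] * n_parts
    for node, w in node_weight.items():
        loads[assign[node]] += w
    avg = sum(loads) / n_parts
    return cut, max(loads) / avg

# Toy example: 4 agents on 2 hosts, heavy edges between agents 0-1 and 2-3.
nodes = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
edges = [(0, 1, 5.0), (1, 2, 1.0), (2, 3, 5.0), (0, 3, 1.0)]
good = {0: 0, 1: 0, 2: 1, 3: 1}   # keeps both heavy edges internal
bad  = {0: 0, 1: 1, 2: 0, 3: 1}   # cuts both heavy edges
```

A graph partitioner such as the one the abstract describes searches over assignments to minimize the cut subject to a bound on the imbalance factor; here `good` achieves cut 2.0 while `bad` pays 12.0, both perfectly balanced.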
3.
4.
Andrew M. Raim, Matthias K. Gobbert, Nagaraj K. Neerchal, Jorge G. Morel 《Journal of Statistical Computation and Simulation》2013,83(12):2178-2194
Numerical methods are needed to obtain maximum-likelihood estimates (MLEs) in many problems, and computation time can be an issue for some likelihoods even with modern computing power. We consider one such problem, in which the assumed model is a random-clumped multinomial distribution. We compute MLEs for this model in parallel using the Toolkit for Advanced Optimization (TAO) software library, with the computations performed on a distributed-memory cluster with a low-latency interconnect. We demonstrate that for larger problems, scaling the number of processes improves wall-clock time significantly. An illustrative example shows how parallel MLE computation can be useful in a large data analysis. Our experience with a direct numerical approach indicates that more substantial gains may be obtained by exploiting the specific structure of the random-clumped model.
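Data-parallel MLE computation of the kind described above rests on the log-likelihood being a sum over observations, so partial sums can be evaluated by independent processes and reduced. A sketch under loud assumptions: the two-component "all-or-nothing clump" binomial mixture below is an illustrative stand-in, not the paper's random-clumped multinomial, and the chunked loop marks where `Pool.map` or an MPI reduce would drop in:

```python
import math

def loglik_chunk(params, data, m):
    """Partial log-likelihood for one chunk of success counts out of m trials.

    Illustrative model: with probability rho all m trials clump together
    (all successes w.p. p, all failures w.p. 1-p); otherwise ordinary binomial.
    """
    p, rho = params
    total = 0.0
    for y in data:
        f = math.comb(m, y) * p**y * (1 - p)**(m - y)        # binomial part
        clump = p if y == m else (1 - p) if y == 0 else 0.0   # clumped part
        total += math.log((1 - rho) * f + rho * clump)
    return total

def loglik_chunked(params, data, m, n_chunks=4):
    chunks = [data[i::n_chunks] for i in range(n_chunks)]
    # Each call below is independent of the others; a process pool or
    # MPI rank can own one chunk and the results are summed (reduced).
    return sum(loglik_chunk(params, c, m) for c in chunks)

data = [0, 3, 5, 5, 2, 5, 0, 4]
serial = loglik_chunk((0.6, 0.1), data, 5)
chunked = loglik_chunked((0.6, 0.1), data, 5)
```

The chunked evaluation reproduces the serial value exactly, which is what makes the optimizer-side parallelism transparent: the optimizer only ever sees the reduced sum.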
5.
Line-intercept sampling (Becker, 1991) and network sampling (Becker et al., 1998) seem to be the most appropriate procedures for estimating animal abundance in a study area on the basis of tracks. The purpose of this paper is to investigate the statistical properties of these alternative procedures by constructing confidence intervals for abundance and comparing interval performance in terms of width and coverage.
6.
The authors first note that current official U.S. population estimates and projections are based on the assumption that certain characteristics of the institutionalized population remain constant between censuses. The article "examines the empirical validity of this assumption by using data from the decennial censuses for 1940-1980 and, in light of substantial decade to decade changes in the age patterns of the institutional proportions for sex- and race-specific populations, seeks to develop alternative methods." As part of these alternative methods, "parametric curves are fit to the age-specific institutional proportions for each population for each decade. A study of the observed historical variation in the parameters of these curves then leads to some suggestions about how their shapes can be estimated between censuses and projected beyond the latest available census to provide more accurate estimates and projections of the civilian noninstitutional population." This is a revised version of a paper originally presented at the 1984 Annual Meeting of the Population Association of America (see Population Index, Vol. 50, No. 3, Fall 1984, p. 439).
7.
Development of anti-cancer therapies usually involves small- to moderate-size studies to provide initial estimates of response rates before larger studies are initiated to quantify response more precisely. These early trials often each contain a single tumor type, possibly with other stratification factors. The response rate for a given tumor type is routinely reported as the percentage of patients meeting a clinical criterion (e.g. tumor shrinkage), without regard to response in the other studies. These estimates (maximum-likelihood estimates, or MLEs) approximate the true value on average, but their variances are usually large, especially in small- to moderate-size studies. The approach presented here is offered as a way to improve overall estimation of response rates when several small trials are considered, by reducing the total uncertainty. The shrinkage estimators considered here (James-Stein/empirical Bayes and hierarchical Bayes) are alternatives that use information from all studies to provide potentially better estimates for each study. While these estimators introduce a small bias, they have considerably smaller variance and thus tend to be better in terms of total mean squared error. They provide a better view of drug performance in that group of tumor types as a whole, as opposed to estimating each response rate individually without consideration of the others. In technical terms, the vector of estimated response rates is, on average, nearer the vector of true values than the vector of the usual unbiased MLEs applied to such trials.
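The core bias-variance trade the abstract describes can be shown in a few lines. A minimal empirical-Bayes-style sketch (the data, names, and the simple normal-approximation shrinkage weight are illustrative, not the estimator from the paper): each trial's MLE is pulled toward the pooled rate, with more shrinkage for smaller, noisier trials.

```python
successes = [4, 9, 2, 7, 5]          # responders per trial (toy data)
n_patients = [20, 25, 15, 20, 18]

mle = [s / n for s, n in zip(successes, n_patients)]
pooled = sum(successes) / sum(n_patients)

# Method-of-moments estimate of between-trial variance (floored above 0),
# then shrink each MLE toward the pooled rate in proportion to its
# sampling variance: small trials borrow more strength from the others.
k = len(mle)
between = max(sum((p - pooled) ** 2 for p in mle) / (k - 1), 1e-12)
shrunk = []
for p, n in zip(mle, n_patients):
    sampling_var = pooled * (1 - pooled) / n
    b = sampling_var / (sampling_var + between)   # shrinkage weight in [0, 1]
    shrunk.append((1 - b) * p + b * pooled)
```

By construction each shrunk estimate lies between its own MLE and the pooled rate, which is exactly the "small bias, smaller variance" compromise described above.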
8.
Yasushi Nagata 《Communications in Statistics: Theory and Methods》2013,42(5):985-1004
In this paper we consider the Neyman accuracy and the Wolfowitz accuracy of the Stein-type improved confidence interval I_S for the disturbance variance in a linear regression model. The Neyman accuracy is a measure related to the unbiasedness of a confidence interval, and the Wolfowitz accuracy is related to the closeness of the endpoints to the true parameter. We show that I_S is not unbiased and give some numerical results for the Neyman accuracy. For the Wolfowitz accuracy, we derive a sufficient condition for I_S to improve on the usual confidence interval under this criterion and show numerically that a large degree of improvement can be obtained.
9.
Michail Papathomas 《Scandinavian Journal of Statistics》2008,35(1):169-185
We introduce a fully parametric approach for updating beliefs regarding correlated binary variables, after marginal probability assessments based on information of varying quality are provided by an expert. This approach allows for the calculation of a predictive joint density for future assessments. The proposed methodology offers new insight into the parameters that control the dependence of the binary variables, and the relation of these parameters to the joint density of the probability assessments. A comprehensible elicitation procedure for the model parameters is put forward. The approach taken is motivated and illustrated through a practical application.
10.
The problems of estimation and hypotheses testing on the parameters of two correlated linear models are discussed. Such models are known to have direct applications in epidemiologic research, particularly in the field of family studies. When the data are unbalanced, the maximum-likelihood estimation of the parameters is achieved by adopting a fairly simple numerical algorithm. The asymptotic variances and covariances of the estimators are derived, and the procedures are illustrated on arterial-blood-pressure data from the literature.
11.
12.
In terms of statistical organization, China has two major statistical systems. The first is the government (national) statistical system, made up of the National Bureau of Statistics, the statistical agencies of local people's governments at or above the county level, township statisticians and township information networks, and the three sampling-survey teams covering cities, rural areas, and enterprises. The second is the departmental statistical system, made up of the statistical agencies (or designated statistical officers) of the departments of the State Council and of local people's governments at all levels. These two systems are the main sources from which China obtains statistical data for scientific decision-making. How to divide responsibilities between the two systems scientifically and rationally, so that each plays to its strengths and statistical data are supplied to Party and government leadership at all levels rapidly, accurately, comprehensively, conveniently, and at low cost, is a problem that must be solved under the new circumstances. This paper offers an exploratory discussion.
13.
14.
Long memory in conditional variance is one of the empirical features exhibited by many financial time series. One class of models suggested to capture this behavior is the Fractionally Integrated GARCH (FIGARCH) model (Baillie, Bollerslev and Mikkelsen 1996), in which the ideas of fractional integration originally introduced by Granger (1980) and Hosking (1981) for processes for the mean are applied in a GARCH framework. In this paper we derive analytic expressions for the second-order derivatives of the log-likelihood function of FIGARCH processes, with a view to the advantages that can be gained in computational speed and estimation accuracy. The comparison is computationally intensive given the typical sample size of the time series involved and the way the likelihood function is built. An illustration is provided on exchange-rate and stock-index data.

A preliminary version of this paper was presented at the S.Co. 2001 conference in Bressanone. We would like to thank Silvano Bordignon for being an insightful and constructive discussant, and Luisa Bisaglia and Giorgio Calzolari for providing useful comments. We also thank Tim Bollerslev for providing the data on the DEM/USD exchange rate used in Baillie, Bollerslev and Mikkelsen (1996).
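The fractional integration underlying FIGARCH enters the likelihood through the expansion (1 − L)^d = Σ_k π_k L^k, whose weights obey the standard recurrence π_0 = 1, π_k = π_{k−1}(k − 1 − d)/k. A short sketch of that building block (the analytic second derivatives in the paper go well beyond this; the truncated expansion below is just the part every likelihood evaluation needs, and one reason the computation is expensive):

```python
def frac_diff_weights(d, n):
    """First n weights pi_k of (1 - L)^d = sum_k pi_k L^k."""
    pi = [1.0]
    for k in range(1, n):
        pi.append(pi[-1] * (k - 1 - d) / k)
    return pi

w = frac_diff_weights(0.4, 2000)
```

The first-order weight is −d, and because (1 − 1)^d = 0 for d > 0, the partial sums of the weights decay slowly toward zero; that slow hyperbolic decay is precisely the long-memory feature the abstract discusses.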
15.
I. Characteristics and role of survey and statistical work in the Central Soviet Area. (1) Characteristics of survey and statistical work in the Central Soviet Area. 1. The importance of statistical work. Statistical work is a sharp instrument for understanding society. To wage revolutionary struggle in China, it was necessary to understand and grasp China's national conditions, and this required social investigation. Through investigation and statistics, detailed and reliable material was obtained, providing a dependable basis for guiding work and formulating policy. In May 1930, Mao Zedong pointed out in Investigation Work (《调查工作》) that "without investigation there is no right to speak" and stressed that "investigation is solving problems": all conclusions are reached at the end of investigating a situation, not before it. He also required: "Everyone with leadership responsibility, from the chairman of a township government to the chairman of the national central government, from detachment leader to commander-in-chief, from branch secretary to general secretary, must…
16.
Misclassifications in binary responses have long been a common problem in medical and health surveys. One way to handle misclassifications in clustered or longitudinal data is to incorporate the misclassification model through the generalized estimating equation (GEE) approach. However, existing methods are developed under a non-survey setting and cannot be used directly for complex survey data. We propose a pseudo-GEE method for the analysis of binary survey responses with misclassifications. We focus on cluster sampling and develop analysis strategies for analyzing binary survey responses with different forms of additional information for the misclassification process. The proposed methodology has several attractive features, including simultaneous inferences for both the response model and the association parameters. Finite sample performance of the proposed estimators is evaluated through simulation studies and an application using a real dataset from the Canadian Longitudinal Study on Aging.
17.
Asymptotics and Criticality for a Correlated Bernoulli Process
C.C. Heyde 《Australian & New Zealand Journal of Statistics》2004,46(1):53-57
A generalized binomial distribution proposed by Drezner & Farnum in 1993 arises from a correlated Bernoulli process with interesting asymptotic properties that differ strikingly in the neighbourhood of a critical point. The basic asymptotics and a short-range/long-range dependence dichotomy are presented in this note.
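A simulation sketch of this process, under an explicit assumption about the model's form: the conditional success probability is taken here as P(X_{n+1} = 1 | past) = (1 − θ)p + θ·(successes so far)/n with X_1 ~ Bernoulli(p), and θ = 1/2 is the critical point the note refers to; readers should check Drezner & Farnum (1993) for the exact parametrization.

```python
import random

def simulate(p, theta, n, rng):
    """Simulate n steps of the correlated Bernoulli process (form assumed above)."""
    s = 1 if rng.random() < p else 0   # X_1 ~ Bernoulli(p)
    xs = [s]
    for k in range(1, n):
        # Success probability mixes the baseline p with the running success rate.
        prob = (1 - theta) * p + theta * (s / k)
        x = 1 if rng.random() < prob else 0
        xs.append(x)
        s += x
    return xs

rng = random.Random(7)
iid_path = simulate(0.3, 0.0, 10_000, rng)   # theta = 0: ordinary i.i.d. Bernoulli
dep_path = simulate(0.3, 0.7, 10_000, rng)   # theta > 1/2: long-range dependence
```

Under this form each X_n has marginal mean p, but for θ above the critical point the sample mean of a single path no longer concentrates around p, which is the dichotomy the note analyzes.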
18.
《Communications in Statistics: Theory and Methods》2013,42(7):1517-1531
Scale-equivariant estimators of the common variance σ² of correlated normal random variables have mean squared errors (MSEs) that depend on the unknown correlations. For this reason, a scale-equivariant estimator of σ² that uniformly minimizes the MSE does not exist. For the equi-correlated case, we develop three equivariant estimators of σ²: a Bayesian estimator under the invariant prior, as well as two non-Bayesian estimators. We then generalize these three estimators to the case of several variables with multiple unknown correlations. In addition, we develop a system of confidence intervals that attain the desired coverage probability while being efficient in terms of expected length.
19.
Statistics and Computing
20.
Khangelani Zuma 《Communications in Statistics: Theory and Methods》2013,42(4):725-730
In epidemiological studies where subjects are seen periodically at follow-up visits, interval-censored data occur naturally: the exact time of a change of state (such as HIV seroconversion) is not known, only that it occurred within some time interval. In multi-stage sampling or partner-tracing studies, individuals are grouped into smaller subgroups, and individuals within a subgroup share an unobservable subgroup-specific frailty which induces correlation within the subgroup. In this paper, we consider a Bayesian model for analysing correlated interval-censored data. Parameters are estimated using Markov chain Monte Carlo methods, specifically the Gibbs sampler.
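The Gibbs-sampler idea can be sketched in a deliberately simplified setting (assumptions: exponential event times with rate `lam`, a conjugate Gamma(a0, rate b0) prior, and no frailty term; the paper's correlated-frailty model adds further sampling steps per subgroup). Each iteration imputes the latent event time inside its observed interval, then draws the rate from its conjugate posterior given the imputed times.

```python
import math
import random

def trunc_exp(lam, lo, hi, rng):
    """Inverse-CDF draw from Exp(lam) truncated to the interval (lo, hi]."""
    a, b = math.exp(-lam * lo), math.exp(-lam * hi)
    return -math.log(a - rng.random() * (a - b)) / lam

def gibbs(intervals, a0, b0, iters, rng):
    lam, draws = 1.0, []
    for _ in range(iters):
        # Step 1: impute each latent event time within its censoring interval.
        times = [trunc_exp(lam, lo, hi, rng) for lo, hi in intervals]
        # Step 2: conjugate update, lam ~ Gamma(a0 + n, rate b0 + sum(times)).
        lam = rng.gammavariate(a0 + len(times), 1.0 / (b0 + sum(times)))
        draws.append(lam)
    return draws

rng = random.Random(1)
intervals = [(0.5, 1.5), (1.0, 2.0), (0.2, 0.8), (2.0, 3.5)]
draws = gibbs(intervals, a0=2.0, b0=1.0, iters=2000, rng=rng)
post_mean = sum(draws[500:]) / len(draws[500:])   # discard burn-in
```

The data-augmentation step is what makes interval censoring tractable here; a frailty model like the paper's would insert an additional draw of each subgroup's shared frailty between the two steps.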