Similar Articles
1.
This paper proposes a methodology for modeling the mobility of characters in Massively Multiplayer Online (MMO) games. We model the movement of characters across the map of an MMO game as a jump process, using two approaches, parametric and non-parametric, to model the times spent in the states of the process. Furthermore, a mobility simulator is presented. We analyze geographic position data of characters in the map of the game World of Warcraft and compare the observed and simulated data. The proposed methodology and the simulator can be used to optimize the allocation of computing load across servers, which is extremely important for game performance, service quality and cost.
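
As a rough illustration of the modeling idea, the sketch below simulates a jump process with exponential (parametric) holding times and a zone-to-zone transition matrix; the function and the toy parameters are illustrative only, not taken from the paper.

import numpy as np

def simulate_jump_process(P, rates, start, t_max, rng=None):
    """Simulate movement between map zones as a jump process.
    P: (K, K) zone-to-zone transition probability matrix;
    rates: (K,) exponential holding-time rates per zone (parametric case);
    start: initial zone index; t_max: simulation horizon."""
    rng = rng or np.random.default_rng()
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        t += rng.exponential(1.0 / rates[state])    # time spent in the current zone
        if t >= t_max:
            break
        state = rng.choice(len(rates), p=P[state])  # jump to the next zone
        path.append((t, state))
    return path

# Toy example: three zones, uniform transitions, different dwell rates.
P = np.array([[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]])
print(simulate_jump_process(P, rates=np.array([1.0, 0.5, 2.0]), start=0, t_max=10.0))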

2.
This paper deals with the analysis of multivariate survival data from a Bayesian perspective using Markov chain Monte Carlo methods. The Metropolis algorithm, used within the Gibbs sampler, is employed to calculate some of the marginal posterior distributions. A multivariate survival model is proposed, since survival times within the same group are correlated as a consequence of a frailty random block effect. The conditional proportional-hazards model of Clayton and Cuzick is used with a martingale structured prior process (Arjas and Gasbarra) for the discretized baseline hazard. Besides the calculation of the marginal posterior distributions of the parameters of interest, this paper presents some Bayesian EDA diagnostic techniques to assess model adequacy. The methodology is exemplified with kidney infection data, where the times to infection within the same patient are expected to be correlated.
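
A generic Metropolis-within-Gibbs skeleton of the kind the abstract refers to, assuming only a computable log-posterior kernel; the frailty survival model's specific full conditionals are not reproduced here.

import numpy as np

def metropolis_within_gibbs(log_post, theta0, step, n_iter=2000, rng=None):
    """Update one coordinate at a time with a random-walk Metropolis step;
    log_post is the (unnormalized) log posterior kernel."""
    rng = rng or np.random.default_rng()
    theta = np.array(theta0, dtype=float)
    lp = log_post(theta)
    draws = np.empty((n_iter, theta.size))
    for it in range(n_iter):
        for j in range(theta.size):
            prop = theta.copy()
            prop[j] += step[j] * rng.standard_normal()
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
                theta, lp = prop, lp_prop
        draws[it] = theta
    return draws

# Toy target: two independent standard normal coordinates.
draws = metropolis_within_gibbs(lambda t: -0.5 * np.sum(t ** 2),
                                theta0=[0.0, 0.0], step=[1.0, 1.0])
print(draws.mean(axis=0), draws.std(axis=0))   # near [0, 0] and [1, 1]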

3.
The modeling and analysis of lifetime data, in which the main endpoints are the times at which an event of interest occurs, is of great interest in medical studies. In such studies it is common to observe two or more lifetimes associated with the same unit, such as the times to different deterioration levels or the times to reaction to a treatment in pairs of organs like lungs, kidneys, eyes or ears. In medical applications it is also possible that a cure rate is present and needs to be modeled together with lifetime data containing long-term survivors. This paper presents a comparative study, under a Bayesian approach, of some existing continuous and discrete bivariate distributions, such as the bivariate exponential and bivariate geometric distributions, in the presence of a cure rate, censored data and covariates. For lifetimes related to cured patients, standard mixture cure rate models are assumed in the data analysis. The posterior summaries of interest are obtained using Markov chain Monte Carlo methods. Two real medical data sets are considered to illustrate the proposed methodology.
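
For reference, the standard mixture cure rate model named here has the population survival function below, where pi is the cured fraction and S_0 the survival function of the susceptible patients; the logistic link for covariates is one common choice, not necessarily the one used in the paper.

\[
S_{\mathrm{pop}}(t) = \pi + (1-\pi)\,S_0(t), \qquad
\pi(\mathbf{x}) = \frac{\exp(\mathbf{x}^{\top}\boldsymbol{\beta})}{1+\exp(\mathbf{x}^{\top}\boldsymbol{\beta})}.
\]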

4.
A novel fully Bayesian approach for modeling survival data with explanatory variables using the Piecewise Exponential Model (PEM) with a random time grid is proposed. We consider a class of correlated Gamma prior distributions for the failure rates. Such a prior specification is obtained via the dynamic generalized modeling approach jointly with a random time grid for the PEM. A product distribution is considered for modeling the prior uncertainty about the random time grid, making it possible to use the structure of the Product Partition Model (PPM) to handle the problem. A unifying notation for the construction of the likelihood function of the PEM, suitable for both static and dynamic modeling approaches, is considered. Procedures to evaluate the performance of the proposed model are provided. Two case studies are presented in order to exemplify the methodology. For comparison purposes, the data sets are also fitted using the dynamic model with a fixed time grid established in the literature. The results show the superiority of the proposed model.
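
To fix notation, a minimal sketch of the PEM log-likelihood on a fixed (static) time grid is given below; the paper's random-grid and correlated-Gamma-prior machinery is not reproduced, and all names are illustrative.

import numpy as np

def pem_loglik(times, events, grid, lam):
    """Log-likelihood of the piecewise exponential model on a fixed grid.
    times: observed times; events: 1 = failure, 0 = censored;
    grid: cut points 0 = a_0 < a_1 < ... < a_J (a_J >= max(times));
    lam: failure rate on each interval (a_{j-1}, a_j]."""
    ll = 0.0
    for t, d in zip(times, events):
        cum = 0.0
        for j in range(len(lam)):
            lo, hi = grid[j], grid[j + 1]
            cum += lam[j] * max(0.0, min(t, hi) - lo)  # cumulative hazard up to t
            if d == 1 and lo < t <= hi:
                ll += np.log(lam[j])                   # log hazard at the failure
        ll -= cum
    return ll

print(pem_loglik([0.7, 1.5, 3.0], [1, 1, 0],
                 grid=[0.0, 1.0, 2.0, 5.0], lam=[0.5, 0.3, 0.2]))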

5.
Inverse regression estimation for censored data
An inverse regression methodology for assessing predictor performance in the censored data setup is developed, along with inference procedures and a computational algorithm. The technique developed here allows for conditioning on the unobserved failure time, along with a weighting mechanism that accounts for the censoring. The implementation is nonparametric and computationally fast. This provides an efficient methodological tool that can be used especially in cases where the usual modeling assumptions are not applicable to the data under consideration. It can also serve as a good diagnostic tool in the model selection process. Theoretical justification of the consistency and asymptotic normality of the methodology is provided. Simulation studies and two data analyses illustrate the practical utility of the procedure.
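
The abstract's weighting mechanism for censoring is in the spirit of inverse-probability-of-censoring weighting (IPCW). The sketch below computes such weights from a Kaplan-Meier estimate of the censoring distribution; this is an assumption about the general device, not a reconstruction of the authors' exact scheme (ties are ignored).

import numpy as np

def ipcw_weights(times, events):
    """Inverse-probability-of-censoring weights from a Kaplan-Meier
    estimate of the censoring survival function G.
    Uncensored points get weight 1/G(t-); censored points get weight 0."""
    order = np.argsort(times)
    d = events[order]
    n = len(times)
    G = np.ones(n)
    surv = 1.0
    for i in range(n):
        if d[i] == 0:                        # a censoring event updates G
            surv *= 1.0 - 1.0 / (n - i)
        G[i] = surv
    Gm = np.concatenate(([1.0], G[:-1]))     # G evaluated just before each time
    w = np.where(d == 1, 1.0 / Gm, 0.0)
    out = np.empty(n)
    out[order] = w
    return out

times = np.array([2.0, 3.0, 5.0, 7.0])
events = np.array([1, 0, 1, 1])        # 1 = failure observed, 0 = censored
print(ipcw_weights(times, events))     # [1.0, 0.0, 1.5, 1.5]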

6.
金玉国 (Jin Yuguo), 《统计研究》 (Statistical Research), 2011, 28(1): 91-98
Following the evolution of their content, econometric models can be divided into classical and non-classical econometric models. Non-classical econometric modeling methodology is both a development and an extension of classical econometric modeling methodology, yet differs greatly from it in modeling philosophy, modeling methods and model application. This paper systematically reviews, organizes and analyzes non-classical econometric modeling methodology from several perspectives, including data types, model variables, modeling objects, parameter forms and modeling ideas, with emphasis on comparing its characteristics with those of classical econometric modeling methodology, and offers a preliminary summary of the laws governing the development and evolution of econometric model methodology.

7.
An important problem in reliability and survival analysis is that of modeling degradation together with any observed failures in a life test. Here, based on a continuous cumulative damage approach with a Gaussian process describing degradation, a general accelerated test model is presented in which failure times and degradation measures can be combined for inference about system lifetime. Some specific models in which the drift of the Gaussian process depends on the acceleration variable are discussed in detail. Illustrative examples using simulated data as well as degradation data observed in carbon-film resistors are presented.
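
A minimal sketch of the continuous cumulative damage idea, assuming a Wiener (Gaussian) degradation process whose drift increases with the acceleration variable through an illustrative linear link mu(s) = a + b*s; the failure time is the first passage of the damage path over a threshold. All names and values are illustrative.

import numpy as np

def simulate_degradation(mu, sigma, dt, n_steps, threshold, rng=None):
    """Simulate a Wiener degradation path W(t) = mu*t + sigma*B(t) and return
    (path, first time the cumulative damage crosses the failure threshold)."""
    rng = rng or np.random.default_rng()
    steps = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
    path = np.cumsum(steps)
    hit = np.nonzero(path >= threshold)[0]
    t_fail = (hit[0] + 1) * dt if hit.size else np.inf   # censored if no crossing
    return path, t_fail

# Drift increasing in the acceleration variable s via mu(s) = a + b*s.
a, b, s = 0.1, 0.05, 4.0
path, t_fail = simulate_degradation(mu=a + b * s, sigma=0.2, dt=0.01,
                                    n_steps=10_000, threshold=2.0)
print(t_fail)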

8.
In many medical studies, patients are followed longitudinally and interest is in assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied, in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage approach problematic. We propose a regression calibration approach that appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements with a bivariate linear mixed model that conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between the longitudinal data and the time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate this methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.

9.
The explicit estimators of the parameters α, μ and σ² are obtained using the methodology known as modified maximum likelihood (MML) when the distribution of the first occurrence time of an event in a series process is assumed to be Weibull. The efficiencies of the MML estimators are compared with those of the corresponding nonparametric (NP) estimators, and it is shown that the proposed estimators are more efficient than the NP estimators. In this study, we extend these results to the case where the distribution of the first occurrence time is Gamma, another widely used and well-known distribution in reliability analysis. A real data set taken from the literature is analyzed at the end of the study to illustrate the methodology presented in this paper.

10.
A class of prior distributions for multivariate autoregressive models is presented. This class of priors is built taking into account the latent component structure that characterizes a collection of autoregressive processes. In particular, the state-space representation of a vector autoregressive process leads to the decomposition of each time series in the multivariate process into simple underlying components. These components may have a common structure across the series. A key feature of the proposed priors is that they allow the modeling of such common structure. This approach also takes into account the uncertainty in the number of latent processes, consequently handling model order uncertainty in the multivariate autoregressive framework. Posterior inference is achieved via standard Markov chain Monte Carlo (MCMC) methods. Issues related to inference and exploration of the posterior distribution are discussed. We illustrate the methodology analyzing two data sets: a synthetic data set with quasi-periodic latent structure, and seasonally adjusted US monthly housing data consisting of housing starts and housing sales over the period 1965 to 1974.

11.
Missing covariate data are a common issue in generalized linear models (GLMs). A model-based procedure arising from properly specifying joint models for both the partially observed covariates and the corresponding missingness indicator variables represents a sound and flexible methodology, which lends itself to maximum likelihood estimation since the likelihood function is available in computable form. In this paper, a novel model-based methodology is proposed for the regression analysis of GLMs when the partially observed covariates are categorical. Pair-copula constructions are used as graphical tools to facilitate the specification of the high-dimensional probability distributions of the underlying missingness components. The model parameters are estimated by maximizing the weighted log-likelihood function using an EM algorithm. To compare the performance of the proposed methodology with other well-established approaches, including complete-case analysis and multiple imputation, several simulation experiments with Binomial, Poisson and Normal regressions are carried out under both missing-at-random and not-missing-at-random mechanisms. The methods are illustrated by modeling data from a stage III melanoma clinical trial. The results show that the methodology is rather robust and flexible, representing a competitive alternative to traditional techniques.

12.
Recently, statistical process control (SPC) methodologies have been developed to accommodate autocorrelated data. A primary method for dealing with autocorrelated data is the use of residual charts. Although this methodology has the advantage that it can be applied to any autocorrelated data, it requires time series modeling effort. In addition, for an X residual chart the detection capability is sometimes small compared to the X chart and the EWMA chart. Zhang (1998) proposed the EWMAST chart, which is constructed by charting the EWMA statistic for stationary processes to monitor the process mean. The performance of the EWMAST chart, the X chart, the X residual chart and other charts was compared in Zhang (1998). In this paper, comparisons are made among the EWMAST chart, the CUSUM residual chart and the EWMA residual chart, as well as the X residual chart and the X chart, via the average run length.
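
For concreteness, the EWMA charting statistic underlying the EWMAST chart is computed below; the EWMAST chart itself then widens the control limits using the estimated autocovariances of the stationary process, which is not reproduced in this sketch.

import numpy as np

def ewma(x, lam=0.2):
    """EWMA statistic z_t = lam * x_t + (1 - lam) * z_{t-1}, started at the
    series mean (only the charting statistic; EWMAST limits additionally
    require the autocovariances of the process)."""
    z = np.empty(len(x))
    prev = float(np.mean(x))
    for t, xt in enumerate(x):
        prev = lam * xt + (1.0 - lam) * prev
        z[t] = prev
    return z

x = np.random.default_rng(0).normal(10.0, 1.0, 50)
print(ewma(x)[:5])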

13.
Typical joint modeling of longitudinal measurements and time-to-event data assumes that the two models share a common set of random effects under a normality assumption. Sometimes, however, the underlying population from which the sample is drawn is heterogeneous, and detecting homogeneous subsamples of it is an important scientific question. In this paper, a finite mixture of normal distributions for the shared random effects is proposed to account for heterogeneity in the population. To detect whether unobserved heterogeneity exists, we use a simple graphical exploratory diagnostic tool proposed by Verbeke and Molenberghs [34] to assess whether the traditional normality assumption for the random effects in the mixed model is adequate. In the joint modeling setting, in the case of evidence against normality (homogeneity), a finite mixture of normals is used for the shared random-effects distribution. A Bayesian MCMC procedure is developed for parameter estimation and inference. The methodology is illustrated using simulation studies. The proposed approach is also applied to a real HIV data set; using the heterogeneous joint model, the individuals are classified into two groups: a high-risk group and a moderate-risk group.

14.
Count data with excess zeros are common in many biomedical and public health applications. The zero-inflated Poisson (ZIP) regression model has been widely used in practice to analyze such data. In this paper, we extend the classical ZIP regression framework to model count time series with excess zeros. A Markov regression model is presented and developed, and the partial likelihood is employed for statistical inference. Partial likelihood inference has been successfully applied in modeling time series where the conditional distribution of the response lies within the exponential family. Extending this approach to ZIP time series poses methodological and theoretical challenges, since the ZIP distribution is a mixture and therefore lies outside the exponential family. In the partial likelihood framework, we develop an EM algorithm to compute the maximum partial likelihood estimator (MPLE). We establish the asymptotic theory of the MPLE under mild regularity conditions and investigate its finite sample behavior in a simulation study. The performance of different partial-likelihood-based model selection criteria is compared in the presence of model misspecification. Finally, we present an epidemiological application to illustrate the proposed methodology.
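
As a simplified illustration of the E- and M-steps, the sketch below fits an i.i.d. zero-inflated Poisson by EM; the paper embeds such steps in a partial-likelihood Markov regression for time series, which is not reproduced here.

import numpy as np

def zip_em(y, n_iter=200):
    """EM for an i.i.d. zero-inflated Poisson:
    P(0) = pi + (1 - pi) * exp(-lam),  P(k) = (1 - pi) * Pois(k; lam)."""
    pi, lam = 0.5, max(y.mean(), 1e-6)
    for _ in range(n_iter):
        # E-step: probability that each observed zero comes from the zero state
        tau = np.where(y == 0, pi / (pi + (1 - pi) * np.exp(-lam)), 0.0)
        # M-step: closed-form updates of the mixing weight and Poisson mean
        pi = tau.mean()
        lam = ((1 - tau) * y).sum() / (1 - tau).sum()
    return pi, lam

y = np.concatenate([np.zeros(60, dtype=int),
                    np.random.default_rng(1).poisson(3.0, 40)])
print(zip_em(y))   # roughly (0.6, 3.0) for this toy data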

15.
Efficient estimation of the regression coefficients in longitudinal data analysis requires a correct specification of the covariance structure. If misspecification occurs, it may lead to inefficient or biased estimators of the mean parameters. One of the most commonly used methods for handling the covariance matrix is simultaneous modeling based on its Cholesky decomposition. In this paper, we therefore reparameterize covariance structures in longitudinal data analysis through a modified Cholesky decomposition of the covariance matrix itself. Based on this decomposition, the within-subject covariance matrix is decomposed into a unit lower triangular matrix involving moving average coefficients and a diagonal matrix involving innovation variances, which are modeled as linear functions of covariates. We then propose a fully Bayesian inference for joint mean and covariance models based on this decomposition. A computationally efficient Markov chain Monte Carlo method combining the Gibbs sampler and the Metropolis-Hastings algorithm is implemented to obtain simultaneously the Bayesian estimates of the unknown parameters and their standard deviation estimates. Finally, several simulation studies and a real example are presented to illustrate the proposed methodology.
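
The decomposition the abstract describes can be illustrated directly: any covariance matrix factors as sigma = L D L', with L unit lower triangular (the moving average coefficients) and D diagonal (the innovation variances). A minimal numpy sketch, with an illustrative 2x2 matrix:

import numpy as np

def modified_cholesky(sigma):
    """Factor a covariance matrix as sigma = L @ D @ L.T, with L unit lower
    triangular (moving average coefficients) and D diagonal (innovation
    variances)."""
    C = np.linalg.cholesky(sigma)   # sigma = C @ C.T, C lower triangular
    s = np.diag(C)
    L = C / s                       # divide each column by its diagonal entry
    D = np.diag(s ** 2)
    return L, D

sigma = np.array([[4.0, 2.0], [2.0, 3.0]])
L, D = modified_cholesky(sigma)
print(np.allclose(L @ D @ L.T, sigma))   # True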

16.
This paper introduces a new bivariate exponential distribution, called the Bivariate Affine-Linear Exponential distribution, to model moderately negatively dependent data. The construction and characteristics of the proposed bivariate distribution are presented, along with estimation procedures for the model parameters based on maximum likelihood and objective Bayesian analysis. We derive the Jeffreys prior and discuss its frequentist properties based on a simulation study and MCMC sampling techniques. A real data set of mercury concentration in largemouth bass from Florida lakes is used to illustrate the methodology.

17.
It has recently been shown that Shewhart control charts with a variable sampling interval (VSI) perform better than the traditional Shewhart chart with a fixed sampling interval in detecting shifts in the process. Most of this research assumes that the process data or measurements are normal and independent and that the process is subject to only one assignable cause, while in practice these assumptions usually do not hold; some recent studies have addressed only one or two of these violations. In this paper, the situation in which the process data are correlated and follow a non-normal distribution, and in which there is a multiplicity of assignable causes in the process, is considered. For this case, a cost model for the economic design of the VSI X̄ control chart is developed, where the Burr distribution is employed to represent the non-normal distribution of the process data. To obtain the optimal values of the design parameters, a genetic algorithm is employed in which response surface methodology is applied. A numerical example is presented to show the applicability and effectiveness of the proposed methodology. Sensitivity analysis is also carried out to evaluate the effects of cost and input parameters on the performance of the chart.
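
For reference, the Burr XII distribution commonly meant by "the Burr distribution" in this setting has cdf F(x) = 1 - (1 + x^c)^(-k) for x > 0; the sketch below checks the closed form against scipy, with illustrative shape parameters (not the paper's).

import numpy as np
from scipy.stats import burr12

# Burr XII: by varying the shape parameters c and k, the family covers a
# wide range of skewness and kurtosis, hence its use for non-normal data.
c, k = 4.87, 6.16                         # illustrative shape parameters
x = np.linspace(0.2, 1.6, 5)
print(burr12.cdf(x, c, k))                # library cdf
print(1.0 - (1.0 + x ** c) ** (-k))       # matches the closed form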

18.
A new methodology for selecting a Bayesian network for continuous data outside the widely used class of multivariate normal distributions is developed. The ‘copula DAGs’ combine directed acyclic graphs and their associated probability models with copula C/D-vines. Bivariate copula densities introduce flexibility in the joint distributions of pairs of nodes in the network. An information criterion is studied for graph selection tailored to the joint modeling of data based on graphs and copulas. Examples and simulation studies show the flexibility and properties of the method.

19.
This paper analyses the behaviour of goodness-of-fit tests for regression models. To this end, it uses statistics based on an estimate of the integrated regression function with missing observations either in the response variable or in some of the covariates. It proposes several versions of an empirical process, constructed from a previous estimate, that either use only the complete observations or replace the missing observations with imputed values. In the case of missing covariates, a link model is used to fill in the missing observations from other complete covariates. In all situations, bootstrap methodology is used to calibrate the distribution of the test statistics. A broad simulation study compares the different procedures based on empirical regression methodology with smoothed tests previously studied in the literature. The comparison reflects the effect of the correlation between the covariates on the tests based on the imputed sample for missing covariates. In addition, the paper proposes a computational binning strategy to evaluate the tests based on an empirical process for large data sets. Finally, two applications to real data illustrate the performance of the tests.

20.
Many empirical studies are planned with the prior knowledge that some of the data may be missing. This knowledge is seldom explicitly incorporated into the experiment design process for lack of a suitable methodology. This paper proposes an index related to the expected determinant of the information matrix as a criterion for planning block designs. Owing to the intractable nature of the expected determinantal criterion, an analytic expression is presented only for a simple 2×2 layout. A first-order Taylor series approximation is suggested for larger layouts. Ranges over which this approximation is adequate are shown via Monte Carlo simulations. The robustness of information in the block design relative to the completely randomized design with missing data is discussed.
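
A generic Monte Carlo sketch of the expected determinantal criterion, assuming each observation of a design is lost independently with a fixed probability; the design matrix and all names are illustrative, not the paper's.

import numpy as np

def expected_det_info(X, p_miss, n_sim=2000, rng=None):
    """Monte Carlo estimate of E[det(X'X)] when each observation (row of the
    design matrix) is lost independently with probability p_miss."""
    rng = rng or np.random.default_rng()
    dets = []
    for _ in range(n_sim):
        keep = rng.random(X.shape[0]) > p_miss   # surviving observations
        Xs = X[keep]
        dets.append(np.linalg.det(Xs.T @ Xs))    # 0 when too few rows survive
    return float(np.mean(dets))

# Illustrative 2x2 layout: intercept plus a block indicator, two runs per block.
X = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 1.0], [1.0, 1.0]])
print(expected_det_info(X, p_miss=0.1))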
