Similar Documents
20 similar documents found (search time: 31 ms)
1.
An analytic methodology for patient enrollment modeling using a Poisson-gamma model was developed by Anisimov & Fedorov (2005–2007). For modeling hierarchic processes associated with enrollment, a new methodology using evolving stochastic processes is proposed. This provides a rather general and unified framework for describing the various operational processes associated with enrollment. A technique for calculating predictive distributions, means, and credibility bounds for evolving processes is developed. Some applications to modeling operational characteristics in clinical trials are considered, with a focus on modeling events associated with incoming and follow-up patients in different settings. For these models, predictive characteristics are derived in closed form.
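The Poisson-gamma enrollment model can be sketched with a small simulation: site-level enrollment rates are drawn from a Gamma prior, and counts are Poisson given the rates, so predictive means and credibility bounds for total enrollment follow directly. All numerical values below are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Poisson-gamma enrollment model: each of N sites enrolls at an unknown
# rate lambda_i ~ Gamma(alpha, beta); the count at site i over a window
# of length T is then Poisson(lambda_i * T).
alpha, beta = 2.0, 1.0       # Gamma shape and rate (illustrative)
N, T = 20, 6.0               # number of sites, window length in months
n_sim = 100_000

rates = rng.gamma(alpha, 1.0 / beta, size=(n_sim, N))  # lambda_i draws
totals = rng.poisson(rates * T).sum(axis=1)            # total enrollment

mean = totals.mean()                        # theory: N * T * alpha / beta = 240
lo, hi = np.percentile(totals, [5, 95])     # 90% credibility bounds
print(f"predictive mean={mean:.1f}, 90% bounds=({lo:.0f}, {hi:.0f})")
```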

2.
Multivariate (or, interchangeably, multichannel) autoregressive (MCAR) modeling of stationary and nonstationary time series data is achieved one channel at a time, using only scalar computations on instantaneous data. The one-channel-at-a-time modeling is formulated as an instantaneous-response multichannel autoregressive model with orthogonal innovations variance. Conventional MCAR models are expressible as linear algebraic transformations of the instantaneous-response orthogonal-innovations models. By modeling multichannel time series one channel at a time, the problems of modeling multichannel time series are reduced to problems in the modeling of scalar autoregressive time series. Three longstanding time series modeling problems are addressed using this paradigm: achieving a relatively parsimonious MCAR representation, spectral estimation for multichannel stationary time series, and the modeling of nonstationary covariance time series.

3.
A novel fully Bayesian approach for modeling survival data with explanatory variables using the Piecewise Exponential Model (PEM) with a random time grid is proposed. We consider a class of correlated Gamma prior distributions for the failure rates. This prior specification is obtained via the dynamic generalized modeling approach jointly with a random time grid for the PEM. A product distribution is considered for modeling the prior uncertainty about the random time grid, making it possible to use the structure of the Product Partition Model (PPM) to handle the problem. A unifying notation for the construction of the likelihood function of the PEM, suitable for both static and dynamic modeling approaches, is introduced. Procedures to evaluate the performance of the proposed model are provided. Two case studies are presented to exemplify the methodology. For comparison purposes, the data sets are also fitted using the dynamic model with a fixed time grid established in the literature. The results show the superiority of the proposed model.
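A minimal sketch of the piecewise exponential likelihood with a fixed (not random) time grid: the hazard is constant on each interval, and the maximum likelihood estimate of each rate is the event count divided by the total exposure in that interval. The paper's random-grid and correlated-Gamma-prior machinery is not reproduced here; the grid and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential survival times with constant hazard 0.5, censored at t = 6.
t_true = rng.exponential(scale=2.0, size=5000)
status = (t_true < 6.0).astype(int)       # 1 = event observed
obs = np.minimum(t_true, 6.0)

grid = np.array([0.0, 1.0, 2.0, 4.0, 6.0])   # illustrative fixed grid
d = np.zeros(len(grid) - 1)               # events per interval
E = np.zeros(len(grid) - 1)               # exposure per interval
for j in range(len(grid) - 1):
    a, b = grid[j], grid[j + 1]
    d[j] = ((obs > a) & (obs <= b) & (status == 1)).sum()
    E[j] = np.clip(obs - a, 0.0, b - a).sum()

rates = d / E      # piecewise-constant hazard MLEs; all near 0.5 here
print(np.round(rates, 3))
```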

4.
The exponentiated Gumbel model has been shown to be useful in climate modeling, including global warming studies, flood frequency analysis, offshore modeling, rainfall modeling, and wind speed modeling. Here, we consider estimation of the probability density function (PDF) and the cumulative distribution function (CDF) of the exponentiated Gumbel distribution. The following estimators are considered: the uniformly minimum variance unbiased (UMVU) estimator, the maximum likelihood (ML) estimator, the percentile (PC) estimator, the least-squares (LS) estimator, and the weighted least-squares (WLS) estimator. Analytical expressions are derived for the bias and the mean squared error. Simulation studies and real data applications show that the ML estimator performs better than the others.
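A sketch of ML estimation for the shape parameter, assuming the common exponentiated Gumbel parameterization F(x) = 1 - (1 - exp(-e^(-x)))^alpha with location 0 and scale 1 (check the paper for its exact parameterization before reuse):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)

# Sample from the exponentiated Gumbel CDF F(x) = 1 - (1 - G(x))**alpha,
# where G is the standard Gumbel CDF, via the inverse-CDF method.
alpha_true = 2.5
u = rng.uniform(size=20_000)
x = -np.log(-np.log(1.0 - (1.0 - u) ** (1.0 / alpha_true)))

def negloglik(alpha):
    # log f(x) = log(alpha) - x - exp(-x) + (alpha - 1) * log(1 - G(x))
    g = np.exp(-np.exp(-x))               # standard Gumbel CDF at x
    return -(np.log(alpha) - x - np.exp(-x)
             + (alpha - 1.0) * np.log1p(-g)).sum()

res = minimize_scalar(negloglik, bounds=(0.1, 10.0), method="bounded")
print(f"ML estimate of alpha: {res.x:.3f}")   # close to 2.5
```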

5.
In many medical studies, patients are followed longitudinally and interest lies in assessing the relationship between longitudinal measurements and time to an event. Recently, various authors have proposed joint modeling approaches for longitudinal and time-to-event data for a single longitudinal variable. These joint modeling approaches become intractable with even a few longitudinal variables. In this paper we propose a regression calibration approach for jointly modeling multiple longitudinal measurements and discrete time-to-event data. Ideally, a two-stage modeling approach could be applied, in which the multiple longitudinal measurements are modeled in the first stage and the longitudinal model is related to the time-to-event data in the second stage. Biased parameter estimation due to informative dropout makes this direct two-stage approach problematic. We propose a regression calibration approach that appropriately accounts for informative dropout. We approximate the conditional distribution of the multiple longitudinal measurements given the event time by modeling all pairwise combinations of the longitudinal measurements using a bivariate linear mixed model that conditions on the event time. Complete data are then simulated based on estimates from these pairwise conditional models, and regression calibration is used to estimate the relationship between the longitudinal data and the time-to-event data using the complete data. We show that this approach performs well in estimating the relationship between multivariate longitudinal measurements and the time-to-event data and in estimating the parameters of the multiple longitudinal process subject to informative dropout. We illustrate the methodology with simulations and with an analysis of primary biliary cirrhosis (PBC) data.

6.
Time-series count data with excessive zeros frequently occur in environmental, medical and biological studies. These data have been traditionally handled by conditional and marginal modeling approaches separately in the literature. The conditional modeling approaches are computationally much simpler, whereas marginal modeling approaches can link the overall mean with covariates directly. In this paper, we propose new models that can have conditional and marginal modeling interpretations for zero-inflated time-series counts using compound Poisson distributed random effects. We also develop a computationally efficient estimation method for our models using a quasi-likelihood approach. The proposed method is illustrated with an application to air pollution-related emergency room visits. We also evaluate the performance of our method through simulation studies.
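The excess-zero mechanism can be illustrated with a static zero-inflated Poisson fit by maximum likelihood; the paper's time-series models with compound Poisson random effects and quasi-likelihood estimation are considerably more involved, and all values below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(7)

# Simulate zero-inflated Poisson data: with probability pi the count is a
# structural zero, otherwise it is Poisson(lambda).
pi_true, lam_true = 0.3, 4.0
n = 10_000
structural = rng.uniform(size=n) < pi_true
y = np.where(structural, 0, rng.poisson(lam_true, size=n))

def negloglik(theta):
    pi = 1.0 / (1.0 + np.exp(-theta[0]))   # logit/log links keep the
    lam = np.exp(theta[1])                 # parameters in range
    logp = -lam + y * np.log(lam) - gammaln(y + 1.0)
    ll = np.where(y == 0,
                  np.log(pi + (1.0 - pi) * np.exp(-lam)),
                  np.log(1.0 - pi) + logp)
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
lam_hat = np.exp(res.x[1])
print(f"pi_hat={pi_hat:.3f}, lambda_hat={lam_hat:.3f}")
```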

7.
Clustered binary responses are often found in ecological studies. Data analysis may include modeling the marginal probability of response. However, when the association is the main scientific focus, modeling the correlation structure between pairs of responses is the key part of the analysis. Second-order generalized estimating equations (GEE) are well established in the literature, and some of them are more efficient in computational terms, especially for large clusters. Alternating logistic regression (ALR) and orthogonalized residual (ORTH) GEE methods are presented and compared in this paper. Simulation results show a slight superiority of ALR over ORTH. Marginal probabilities and odds ratios are also estimated and compared in a real ecological study involving three-level hierarchical clustering. ALR and ORTH models are useful for modeling complex association structures with large cluster sizes.

8.
Robust parameter design (RPD) is an effective tool that combines experimental design and strategic modeling to determine the optimal operating conditions of a system. The usual assumptions of RPD are normally distributed experimental data and no contamination due to outliers, and the parameter uncertainties in response models are generally neglected. However, using normal-theory modeling methods on skewed data and ignoring parameter uncertainties can create a chain of degradation in the optimization and production phases, such as a misleading fit, poorly estimated optimal operating conditions, and poor-quality products. This article presents a new approach based on confidence interval (CI) response modeling for the process mean. The proposed interval robust design makes the system median unbiased for the mean and uses the midpoint of the interval as a measure of the location performance response. As an alternative robust estimator for process variance response modeling, the biweight midvariance is proposed, which is both resistant and robust in efficiency when normality is not met. The results further show that the proposed interval robust design gives a robust solution to the skewed structure of the data and to contaminated data. The procedure and its advantages are illustrated using two experimental design studies.

9.
In disease mapping, health outcomes measured at the same spatial locations may be correlated, so one can consider jointly modeling the multivariate health outcomes while accounting for their dependence. The general approaches often used for joint modeling include shared component models and multivariate models. An alternative way to model the association between two health outcomes, when one outcome can naturally serve as a covariate of the other, is to use an ecological regression model. For example, in our application, preterm birth (PTB) can be treated as a predictor for low birth weight (LBW) and vice versa. Therefore, we propose to blend ideas from the joint modeling and ecological regression methods to jointly model the relative risks for LBW and PTB over the health districts of Saskatchewan, Canada, in 2000–2010. This approach is helpful when proxies for areal-level contextual factors can be derived from the outcomes themselves and direct information on risk factors is not readily available. Our results indicate that the proposed approach improves the model fit compared with conventional joint modeling methods. Further, we show that when no strong spatial autocorrelation is present, joint outcome modeling using only independent error terms can still provide a better model fit than separate modeling.

10.
We consider regression modeling of survival data subject to right censoring when the full effect of some covariates (e.g. treatment) may be delayed. Several models are proposed, and methods for computing the maximum likelihood estimates of the parameters are described. Consistency and asymptotic normality of the estimators are derived. Some numerical examples are used to illustrate the implementation of the modeling and estimation procedures. Finally, we apply the theory to interim data from a large-scale randomized clinical trial for the prevention of skin cancer.

11.
Within the context of California's public reporting of coronary artery bypass graft (CABG) surgery outcomes, we first thoroughly review popular statistical methods for profiling healthcare providers. Extensive simulation studies are then conducted to compare profiling schemes based on hierarchical logistic regression (LR) modeling under various conditions. Both Bayesian and frequentist methods are evaluated in classifying hospitals into ‘better’, ‘normal’ or ‘worse’ service providers. The simulation results suggest that no single method dominates the others on all accounts. Traditional schemes based on LR tend to identify too many false outliers, while those based on hierarchical modeling are relatively conservative. The issue of over-shrinkage in hierarchical modeling is also investigated using the 2005–2006 California CABG data set. The article provides theoretical and empirical evidence for choosing the right methodology for provider profiling.

12.
Multivariate stochastic volatility models with skew distributions are proposed. Exploiting Cholesky stochastic volatility modeling, univariate stochastic volatility processes with leverage effects and generalized hyperbolic skew t-distributions are embedded in a multivariate analysis with time-varying correlations. Bayesian modeling allows this approach to provide a parsimonious skew structure and to scale up easily to high-dimensional problems. Analyses of daily stock returns are presented as illustrations. Empirical results show that the time-varying correlations and the sparse skew structure contribute to improved prediction performance and Value-at-Risk forecasts.

13.
With the continuing development of big data and the Internet, web surveys have become increasingly widespread. Most web survey samples are nonprobability samples, for which inference under traditional sampling theory is difficult, so solving the inference problem for web survey samples is an urgent need for the development of web surveys in the big data era. This paper is the first to propose, from a modeling perspective, basic approaches to this problem. First, model-based inference for the inclusion probabilities: a propensity score model based on machine learning and variable selection can be built to estimate the inclusion probabilities and infer the population. Second, model-based inference for the target variable: a parametric, nonparametric, or semiparametric superpopulation model can be fitted directly to the target variable. Third, double modeling of the inclusion probabilities and the target variable: weighted estimation and hybrid inference combining the propensity score model and the superpopulation model can be considered. Finally, inclusion-probability modeling based on the generalized Boosted model is used as an example to demonstrate a concrete solution.
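The first approach, inclusion-probability modeling via a boosted propensity score, can be sketched as follows: a boosted classifier distinguishes the nonprobability web sample from a reference probability sample, and inverse-odds pseudo-weights then correct the web-sample mean. The population, selection mechanism, and tuning values below are simulated assumptions for illustration only, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)

# Simulated population; the web sample over-represents large x, so its
# naive mean of y is biased upward (true population mean of y is 2.0).
N = 50_000
x = rng.normal(size=N)
y = 2.0 + 1.5 * x + rng.normal(size=N)
p_web = 1.0 / (1.0 + np.exp(-(x - 1.0)))          # web-selection propensity
web = rng.uniform(size=N) < p_web                 # nonprobability sample
ref = rng.uniform(size=N) < 0.05                  # reference probability sample

# Boosted propensity model: classify web vs reference membership.
X = np.r_[x[web], x[ref]].reshape(-1, 1)
z = np.r_[np.ones(web.sum()), np.zeros(ref.sum())]
clf = GradientBoostingClassifier(max_depth=2, n_estimators=100).fit(X, z)

# Inverse-odds pseudo-weights for the web sample.
p = clf.predict_proba(x[web].reshape(-1, 1))[:, 1]
w = (1.0 - p) / p

naive = y[web].mean()
adjusted = np.average(y[web], weights=w)
print(f"naive={naive:.2f}, adjusted={adjusted:.2f} (truth 2.0)")
```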

14.
Several variable selection procedures are available for continuous time-to-event data. However, if time is measured discretely and many ties therefore occur, models for continuous time are inadequate. We propose penalized likelihood methods that perform efficient variable selection in discrete survival modeling with explicit modeling of the heterogeneity in the population. The method is based on a combination of ridge- and lasso-type penalties that are tailored to the case of discrete survival. The performance is studied in simulation studies and in an application to data on the birth of the first child.
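Discrete-time survival can be cast as binary regression on person-period data, after which an elastic-net (ridge plus lasso) penalty performs variable selection. This sketch omits the paper's explicit heterogeneity (frailty) term and uses illustrative values throughout.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Simulate discrete survival: each subject faces a per-period event
# probability sigmoid(base + x @ beta); only 2 of 10 covariates matter.
n, p, T = 800, 10, 8
X = rng.normal(size=(n, p))
beta = np.r_[0.8, -0.8, np.zeros(p - 2)]
base = -2.0

rows, resp = [], []
for i in range(n):
    h = 1.0 / (1.0 + np.exp(-(base + X[i] @ beta)))
    for t in range(T):                      # one row per period survived
        event = rng.uniform() < h
        rows.append(X[i])
        resp.append(int(event))
        if event:
            break

# Elastic-net (ridge + lasso) penalized logistic regression on the
# person-period data performs the variable selection.
clf = LogisticRegression(penalty="elasticnet", l1_ratio=0.5, C=0.5,
                         solver="saga", max_iter=5000)
clf.fit(np.array(rows), np.array(resp))
coef = clf.coef_.ravel()
print(np.round(coef, 2))    # first two large, remaining eight near zero
```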

15.
We first describe the time series modeling problem in a general way. Then some specific assumptions and observations which are pertinent to the application of these models are made. We next propose a specific approach to the modeling problem, one which yields efficient, easily calculated estimators of all parameters (under the stated assumptions). Finally, the technique is applied to the problem of modeling the census of a particular hospital.

16.
First- and second-order reliability algorithms (FORM and SORM) have been adapted for modeling uncertainty and sensitivity related to flow in porous media. They are called reliability algorithms because they were originally developed for analyzing the reliability of structures. FORM and SORM use a general joint probability model, the Nataf model, as the basis for transforming the original problem formulation into uncorrelated standard normal space, where a first- or second-order estimate of the probability related to some failure criterion can easily be made. Sensitivity measures that incorporate the probabilistic nature of the uncertain variables are also evaluated and are quite useful for indicating which uncertain variables contribute most to the probabilistic outcome. In this paper the reliability approach is reviewed, and its advantages and disadvantages are compared with other probabilistic techniques typically used for modeling flow and transport. Some example applications of FORM and SORM from recent research by the authors and others are reviewed. FORM and SORM have been shown to provide an attractive alternative to other probabilistic modeling techniques in some situations.
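A minimal FORM sketch in standard normal space: the design point minimizes ||u|| on the failure surface g(u) = 0, the reliability index is beta = ||u*||, and Pf is approximated by Phi(-beta). The limit-state function below is illustrative; for a linear g the first-order estimate is exact.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Linear limit-state function in standard normal space; failure when
# g(u) <= 0.  For a linear g, the FORM estimate is exact.
def g(u):
    return 4.0 - u[0] - 2.0 * u[1]

# Design point: the closest point to the origin on the surface g(u) = 0.
res = minimize(lambda u: u @ u, x0=np.ones(2),
               constraints={"type": "eq", "fun": g}, method="SLSQP")
beta = np.linalg.norm(res.x)       # reliability index
pf = norm.cdf(-beta)               # first-order failure probability
print(f"beta={beta:.3f}, Pf={pf:.4f}")   # beta = 4/sqrt(5) ~ 1.789
```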

17.
Mixture modeling in general, and expectation–maximization in particular, can be cumbersome and confusing for applied health researchers, and consequently the full potential of mixture modeling is not realized. This tutorial article was prepared to remedy that deficiency. It addresses important applied problems in survival analysis and handles them in greater generality than the existing work, especially with respect to taking covariates into account. Specifically, the article demonstrates the concepts, tools, and inferential procedures of mixture modeling using head-and-neck cancer data and data on survival time after heart transplant surgery.

18.
The theory of max-stable processes generalizes traditional univariate and multivariate extreme value theory by allowing for processes indexed by a time or space variable. We consider a particular class of max-stable processes, known as M4 processes, that are particularly well adapted to modeling the extreme behavior of multiple time series. We develop procedures for determining the order of an M4 process and for estimating the parameters. To illustrate the methods, some examples are given for modeling jumps in returns in multivariate financial time series. We introduce a new measure to quantify and predict the extreme co-movements in price returns.

19.
In this article we suggest a new multivariate autoregressive process for modeling time-dependent extreme-value-distributed observations. The idea behind the approach is to transform the original observations into latent variables that are univariate normally distributed. The vector autoregressive DCC model is then fitted to the multivariate latent process. The distributional properties of the suggested model are studied extensively. The process parameters are estimated by a two-stage estimation procedure. We derive a prediction interval for future values of the suggested process. The results are applied in an empirical study modeling the behavior of extreme daily stock prices.
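The transformation to latent normal variables is the probability integral transform; a sketch, using a Gumbel marginal as a stand-in for whatever extreme value distribution is fitted in the article:

```python
import numpy as np
from scipy.stats import gumbel_r, norm

rng = np.random.default_rng(9)

# Extreme-value observations (Gumbel here as a stand-in for the fitted
# marginal) are mapped to latent standard normal variables by the
# probability integral transform z = Phi^{-1}(F(x)).
x = gumbel_r.rvs(size=5000, random_state=rng)
z = norm.ppf(gumbel_r.cdf(x))
print(f"latent mean={z.mean():.3f}, sd={z.std():.3f}")   # ~0 and ~1
```

Gaussian time-series machinery (such as the vector autoregressive DCC model of the article) can then be applied to the latent process, and predictions are mapped back through the inverse transform.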

20.
Factor analysis of multivariate spatial data is considered. A systematic approach for modeling the underlying structure of potentially irregularly spaced, geo-referenced vector observations is proposed. Statistical inference procedures for selecting the number of factors and for model building are discussed. We derive a condition under which a simple and practical inference procedure is valid without specifying the form of distributions and factor covariance functions. The multivariate prediction problem is also discussed, and a procedure combining the latent variable modeling and a measurement-error-free kriging technique is introduced. Simulation results and an example using agricultural data are presented.
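The role of the eigenvalue spectrum in selecting the number of factors can be sketched in a simplified non-spatial setting (the spatial covariance structure of the paper is omitted, and all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

# p observed variables driven by k latent factors plus noise: the sample
# covariance eigenvalue spectrum separates the k factor directions from
# the noise floor, which motivates factor-number selection rules.
n, p, k = 2000, 8, 2
L = rng.normal(size=(p, k))                    # loadings
F = rng.normal(size=(n, k))                    # factor scores
X = F @ L.T + 0.3 * rng.normal(size=(n, p))    # observations

eigvals = np.linalg.eigvalsh(np.cov(X.T))[::-1]   # descending order
print(np.round(eigvals, 2))    # first k eigenvalues dominate the rest
```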
