Similar articles
 20 similar articles retrieved (search time: 15 ms)
1.
Time-series count data with excessive zeros frequently occur in environmental, medical and biological studies. These data have been traditionally handled by conditional and marginal modeling approaches separately in the literature. The conditional modeling approaches are computationally much simpler, whereas marginal modeling approaches can link the overall mean with covariates directly. In this paper, we propose new models that can have conditional and marginal modeling interpretations for zero-inflated time-series counts using compound Poisson distributed random effects. We also develop a computationally efficient estimation method for our models using a quasi-likelihood approach. The proposed method is illustrated with an application to air pollution-related emergency room visits. We also evaluate the performance of our method through simulation studies.
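The zero-inflated count structure this abstract targets can be sketched with a simple two-part generator: a structural zero with some probability, otherwise a Poisson draw. This is a minimal illustration only; the paper's compound-Poisson random-effect and quasi-likelihood machinery is not reproduced here, and all parameter names are illustrative.

```python
import numpy as np

def simulate_zip(n, pi_zero, lam, rng):
    """Draw n zero-inflated Poisson counts: with probability pi_zero the
    observation is a structural zero, otherwise it is Poisson(lam)."""
    counts = rng.poisson(lam, size=n)
    counts[rng.random(n) < pi_zero] = 0
    return counts

rng = np.random.default_rng(0)
x = simulate_zip(10_000, pi_zero=0.3, lam=4.0, rng=rng)
zero_frac = (x == 0).mean()   # roughly pi_zero + (1 - pi_zero) * exp(-lam)
```

With these settings the expected zero fraction is about 0.31, noticeably above the exp(-4) ≈ 0.018 a plain Poisson(4) would give, which is the excess-zero feature the marginal models address.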

2.
We introduce a new class of distributions called the Weibull Marshall–Olkin-G family. We obtain some of its mathematical properties. The special models of this family provide bathtub-shaped, decreasing-increasing, increasing-decreasing-increasing, decreasing-increasing-decreasing, monotone, unimodal and bimodal hazard functions. The maximum likelihood method is adopted for estimating the model parameters. We assess the performance of the maximum likelihood estimators by means of two simulation studies. We also propose a new family of linear regression models for censored and uncensored data. The flexibility and importance of the proposed models are illustrated by means of three real data sets.
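The Marshall–Olkin building block underlying such families can be sketched as a tilting of a baseline survival function; a minimal sketch follows, using an exponential baseline for illustration (the full Weibull Marshall–Olkin-G construction composes a Weibull generator on top of this, which is not reproduced here, and the parameter name `alpha` is my own).

```python
import numpy as np

def mo_survival(surv_g, alpha):
    """Marshall-Olkin transform of a baseline survival function surv_g:
    S(x) = alpha * Sg(x) / (1 - (1 - alpha) * Sg(x)), alpha > 0."""
    def s(x):
        sg = surv_g(x)
        return alpha * sg / (1.0 - (1.0 - alpha) * sg)
    return s

base = lambda x: np.exp(-x)           # exponential(1) baseline survival
s = mo_survival(base, alpha=2.0)
xs = np.linspace(0.0, 10.0, 200)
vals = s(xs)                          # a valid survival curve: 1 at 0, decreasing
</gr:never>```

The transform preserves the survival-function properties (starts at 1, decreases to 0) while the extra parameter reshapes the hazard, which is how the family generates the varied hazard shapes listed above.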

3.
A regression model for Poisson-distributed data with autocorrelated errors falls into the class of regression models whose error structure is both heteroscedastic and autocorrelated. In general, this class of regression models is not estimable. However, because the Poisson distribution has variance equal to its mean, this regression model for Poisson-distributed data with autocorrelated errors is estimable. In this note the special structure of the covariance matrix of the model with first-order autocorrelated errors is derived using this property. A method based on the least squares method of Frome, Kutner, and Beauchamp (1973), supplemented by steps for handling autocorrelation drawn from time series analysis, nonlinear regression, and econometrics, is presented for obtaining generalized least squares estimates of the model parameters.
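The generalized least squares step can be sketched as follows, with an AR(1) working covariance Sigma[i, j] = rho^|i-j|. This is a generic GLS illustration under that assumed covariance, not the exact Frome-Kutner-Beauchamp algorithm; note that rho = 0 recovers ordinary least squares.

```python
import numpy as np

def gls_ar1(X, y, rho):
    """Generalized least squares with AR(1) error covariance
    Sigma[i, j] = rho**|i-j|; rho = 0 reduces to ordinary least squares."""
    n = len(y)
    idx = np.arange(n)
    sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    sigma_inv = np.linalg.inv(sigma)
    return np.linalg.solve(X.T @ sigma_inv @ X, X.T @ sigma_inv @ y)

# sanity check: rho = 0 must reproduce the OLS solution
X = np.column_stack([np.ones(6), np.arange(6.0)])
y = np.array([1.0, 2.1, 2.9, 4.2, 4.8, 6.1])
beta_gls = gls_ar1(X, y, rho=0.0)
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

In the Poisson setting the variance-equals-mean property additionally scales this covariance by the fitted means, which is what makes the model estimable despite the heteroscedasticity.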

4.
Survival models for continuous-time data remain the predominant methods of survival analysis. However, when the survival data are discrete, treating them as continuous leads researchers to incorrect results and interpretations. The discrete-time survival model has several advantages in applications: it can accommodate non-proportional hazards, time-varying covariates and tied observations. Its disadvantages concern the reconstruction of the survival data and working with big data sets. Actuaries often rely on complex and large data sets, yet they must be quick and efficient for short-period analyses. Using the full data set creates inefficient processes and consumes time, so sampling design becomes increasingly important for obtaining reliable results. In this study, we incorporate sampling methods into the discrete-time survival model using a real data set on motor insurance. To assess the efficiency of the proposed methodology we conducted a simulation study.

5.
A bivariate generalized linear model is developed as a mixture distribution with one component of the mixture being discrete, with probability mass only at the origin. The use of the proposed model is illustrated by analyzing local-area meteorological measurements with a constant correlation structure that incorporates predictor variables. A Monte Carlo study is performed to evaluate the inferential efficiency of the model parameters for two types of true models. The results suggest that the estimates of the regression parameters are consistent and that the efficiency of the inference increases for the proposed model when ρ≥0.50, especially in larger samples. As an illustration of the bivariate generalized linear model, we analyze precipitation monitoring data from adjacent local stations in Tokyo and Yokohama.

6.
Finite mixture models are currently used to analyze heterogeneous longitudinal data. By releasing the homogeneity restriction of nonlinear mixed-effects (NLME) models, finite mixture models not only can estimate model parameters but also cluster individuals into one of the pre-specified classes with class membership probabilities. This clustering may have clinical significance, which might be associated with a clinically important binary outcome. This article develops a joint modeling of a finite mixture of NLME models for longitudinal data in the presence of covariate measurement errors and a logistic regression for a binary outcome, linked by individual latent class indicators, under a Bayesian framework. Simulation studies are conducted to assess the performance of the proposed joint model and a naive two-step model, in which finite mixture model and logistic regression are fitted separately, followed by an application to a real data set from an AIDS clinical trial, in which the viral dynamics and dichotomized time to the first decline of CD4/CD8 ratio are analyzed jointly.

7.
In this article, the estimation of the bivariate survival function for one modified form of current-status data is considered. Two types of estimators, which are generalizations of the estimators by Campbell and Földes [G. Campbell and A. Földes, Large sample properties of nonparametric statistical inference, in Nonparametric Statistical Inference, B.V. Gnedenko, M.L. Puri, and I. Vincze, eds., North-Holland, Amsterdam, 1982, pp. 103–122] and Dabrowska [D.M. Dabrowska, Kaplan–Meier estimate on the plane, Ann. Stat. 16 (1988), pp. 1475–1489; D.M. Dabrowska, Kaplan–Meier estimate on the plane: weak convergence, LIL, and the bootstrap, J. Multivariate Anal. 29 (1989), pp. 308–325], are proposed. The consistency of the proposed estimators is established. A simulation study is conducted to investigate the performance of the proposed estimators.

8.
This paper presents a new parametric model for recurrent events, in which the time of each recurrence is associated with one or multiple latent causes and no information is provided about the cause responsible for the event. The model is characterized by a rate function and is based on the Poisson-exponential distribution, namely the distribution of the maximum among a random number (truncated Poisson distributed) of exponential times. The time of each recurrence is then given by the maximum lifetime among all latent causes. Inference is based on a maximum likelihood approach. A simulation study is performed in order to observe the frequentist properties of the estimation procedure for small and moderate sample sizes. We also investigate likelihood-based test procedures. A real example from a gastroenterology study concerning small bowel motility during the fasting state is used to illustrate the methodology. Finally, we apply the proposed model to a real data set and compare it with the classical homogeneous Poisson model, which is a particular case.
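The Poisson-exponential construction described above (the maximum of a zero-truncated-Poisson number of exponential lifetimes) can be simulated directly; a minimal sketch, with illustrative parameter names:

```python
import numpy as np

def rpoisson_exp(n, theta, lam, rng):
    """Sample the Poisson-exponential distribution: X is the maximum of N
    i.i.d. Exponential(rate=lam) times, N ~ Poisson(theta) truncated at zero."""
    out = np.empty(n)
    for i in range(n):
        m = 0
        while m == 0:                 # zero-truncated Poisson via rejection
            m = rng.poisson(theta)
        out[i] = rng.exponential(1.0 / lam, size=m).max()
    return out

rng = np.random.default_rng(1)
x = rpoisson_exp(5_000, theta=2.0, lam=1.0, rng=rng)
```

Because at least one latent cause is always present, each draw stochastically dominates a single Exponential(lam) lifetime, which is why the sample mean exceeds 1/lam.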

9.
10.
This paper presents nonparametric two-sample bootstrap tests for means of random symmetric positive-definite (SPD) matrices according to two different metrics: the Frobenius (or Euclidean) metric, inherited from the embedding of the set of SPD matrices in the Euclidean space of symmetric matrices, and the canonical metric, which is defined without an embedding and suggests an intrinsic analysis. A fast algorithm is used to compute the bootstrap intrinsic means in the case of the latter. The methods are illustrated in a simulation study and applied to a two-group comparison of means of diffusion tensors (DTs) obtained from a single voxel of registered DT images of children in a dyslexia study.
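Under the Frobenius metric the mean of SPD matrices is simply the entrywise average (itself SPD), and a two-sample bootstrap statistic can be taken as the Frobenius distance between resampled group means. The sketch below illustrates only that extrinsic case with synthetic 2x2 matrices; the canonical-metric intrinsic mean and the paper's fast algorithm are not reproduced.

```python
import numpy as np

def frobenius_mean(mats):
    """Euclidean (Frobenius) mean of a stack of SPD matrices:
    the entrywise average, which is again SPD."""
    return np.mean(mats, axis=0)

def boot_stat(a, b, rng):
    """One bootstrap replicate of the Frobenius distance between
    resampled group means."""
    ai = rng.integers(0, len(a), len(a))
    bi = rng.integers(0, len(b), len(b))
    return np.linalg.norm(frobenius_mean(a[ai]) - frobenius_mean(b[bi]))

rng = np.random.default_rng(2)
# synthetic samples of 2x2 SPD matrices: a @ a.T + I is always SPD
group1 = np.array([a @ a.T + np.eye(2) for a in rng.normal(size=(20, 2, 2))])
group2 = np.array([a @ a.T + np.eye(2) for a in rng.normal(size=(20, 2, 2))])
stats = np.array([boot_stat(group1, group2, rng) for _ in range(200)])
```

Comparing the observed between-group distance to the bootstrap distribution of `stats` (recentred under the null) would give the test's p-value.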

11.
Wang  Chunjie  Zhao  Bo  Luo  Linlin  Song  Xinyuan 《Lifetime data analysis》2021,27(3):413-436
Lifetime Data Analysis - Current status data occur in many fields including demographical, epidemiological, financial, medical, and sociological studies. We consider the regression analysis of...

12.
A hierarchical Bayesian factor model for multivariate spatially correlated data is proposed. Multiple cancer incidence data in Scotland are jointly analyzed, looking for common components able to detect etiological factors of disease hidden behind the data. The proposed method searches for factor scores incorporating dependence within observations due to the geographical structure. The great flexibility of the Bayesian approach allows the inclusion of prior opinions that adjacent regions have highly correlated observable and latent variables. The proposed model extends a model of Rowe (2003a), starting from the introduction of a separable covariance matrix for the observations. A Gibbs sampling algorithm is implemented to sample from the posterior distributions.

13.
This paper considers the estimation of Cobb-Douglas production functions using panel data covering a large sample of companies observed for a small number of time periods. GMM estimators have been found to produce large finite-sample biases when the standard first-differenced estimator is used. These biases can be dramatically reduced by exploiting reasonable stationarity restrictions on the initial conditions process. Using data for a panel of R&D-performing US manufacturing companies, we find that the additional instruments used in our extended GMM estimator yield much more reasonable parameter estimates.

15.
For analyzing recurrent event data, either the total time scale or the gap time scale is adopted according to the research interest. In particular, the gap time scale is known to be more appropriate for modeling a renewal process. In this paper, we adopt the gap time scale to analyze recurrent event data with repeated observation gaps that cannot be observed completely because the termination times of the observation gaps are unknown. In order to estimate the termination times, an interval-censoring mechanism is applied. Simulation studies are conducted to compare the suggested methods with the unadjusted method that ignores incomplete observation gaps. As a real example, a conviction data set with suspensions is analyzed using the suggested methods.

16.
There have been a number of procedures used to analyze non-monotonic binary data to predict the probability of response. Some classical procedures are the Up and Down strategy, the Robbins–Monro procedure, and other sequential optimization designs. Recently, nonparametric procedures such as kernel regression and local linear regression have been applied to this type of data. It is well known that kernel regression has problems fitting the data near the boundaries, and a drawback of local linear regression is that it may be “too linear” when fitting data from a curvilinear function. The procedure introduced in this paper, called local logistic regression (llogr), fits a logistic regression function at each of the data points. An example is given using United States Army projectile data that supports the use of local logistic regression when analyzing non-monotonic binary data for certain response curves. Properties of local logistic regression are presented along with simulation results that indicate some of the strengths of the procedure.
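The core idea of local logistic regression, fitting a kernel-weighted logistic model at each evaluation point, can be sketched as below. This is a minimal sketch assuming a Gaussian kernel and Newton-Raphson fitting, not the paper's exact estimator; bandwidth and data are illustrative.

```python
import numpy as np

def local_logistic(x0, x, y, bandwidth, n_iter=25):
    """Fit a kernel-weighted logistic regression centred at x0 and
    return the fitted response probability at x0."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)      # Gaussian kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])      # local intercept and slope
    beta = np.zeros(2)
    for _ in range(n_iter):                             # Newton-Raphson updates
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = X.T @ (X * (w * p * (1.0 - p))[:, None]) + 1e-8 * np.eye(2)
        beta += np.linalg.solve(hess, grad)
    return 1.0 / (1.0 + np.exp(-beta[0]))               # fitted probability at x0

rng = np.random.default_rng(3)
x = rng.uniform(-3.0, 3.0, size=1500)
y = (rng.random(1500) < 1.0 / (1.0 + np.exp(-x))).astype(float)
p_mid = local_logistic(0.0, x, y, bandwidth=0.8)
```

Because each local fit is a genuine logistic curve rather than a line, the fitted probabilities stay in (0, 1) near the boundaries, addressing the drawbacks of kernel and local linear regression noted above.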

17.
In many longitudinal studies of recurrent events there is an interest in assessing how recurrences vary over time and across treatments or strata in the population. Usual analyses of such data assume a parametric form for the distribution of the recurrences over time. Here, we consider a semiparametric model for the analysis of such longitudinal studies where data are collected as panel counts. The model is a non-homogeneous Poisson process with a multiplicative intensity incorporating covariates through a proportionality assumption. Heterogeneity is accounted for in the model through subject-specific random effects. The key feature of the model is the use of regression splines to model the distribution of recurrences over time. This provides a flexible and robust method of relaxing parametric assumptions. In addition, quasi-likelihood methods are proposed for estimation, requiring only first and second moment assumptions to obtain consistent estimates. Simulations demonstrate that the method produces estimators of the rate with low bias and whose standardized distributions are well approximated by the normal. The usefulness of this approach, especially as an exploratory tool, is illustrated by analyzing a study designed to assess the effectiveness of a pheromone treatment in disturbing the mating habits of the Cherry Bark Tortrix moth.

18.
The sales promotion data resulting from multiple marketing strategies are usually autocorrelated. Consequently, the characteristics of those data sets can be analyzed using time-series and/or intervention analysis. Traditional time-series intervention analysis focuses on the effects of single or few interventions, and forecasts may be obtained as long as the future interventions can be assured. This study is different from traditional approaches, and considers the cases in which multiple interventions and the uncertainty of future interventions exist in the system. In addition, this study utilizes a set of real sales promotion data to demonstrate the effectiveness of the proposed approach.

19.
A contaminated beta model $(1-\gamma) B(1,1) + \gamma B(\alpha,\beta)$ is often used to describe the distribution of $P$-values arising from a microarray experiment. The authors propose and examine a different approach: namely, using a contaminated normal model $(1-\gamma) N(0,\sigma^2) + \gamma N(\mu,\sigma^2)$ to describe the distribution of $Z$ statistics or suitably transformed $T$ statistics. The authors then address whether a researcher who has $Z$ statistics should analyze them using the contaminated normal model or whether the $Z$ statistics should be converted to $P$-values to be analyzed using the contaminated beta model. The authors also provide a decision-theoretic perspective on the analysis of $Z$ statistics. The Canadian Journal of Statistics 38: 315–332; 2010 © 2010 Statistical Society of Canada
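The contaminated normal mixture, and the posterior probability it implies that a given $Z$ statistic is non-null, can be written down directly. This is a schematic of the mixture model only, not the authors' decision-theoretic procedure; the parameter values are illustrative.

```python
import math

def norm_pdf(z, mu, sigma):
    """Density of N(mu, sigma^2) at z."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior_nonnull(z, gamma, mu, sigma):
    """P(non-null | z) under the mixture
    (1 - gamma) * N(0, sigma^2) + gamma * N(mu, sigma^2)."""
    f0 = (1.0 - gamma) * norm_pdf(z, 0.0, sigma)    # null component mass
    f1 = gamma * norm_pdf(z, mu, sigma)             # contaminating component mass
    return f1 / (f0 + f1)

p_at_mu = posterior_nonnull(3.0, gamma=0.1, mu=3.0, sigma=1.0)
p_at_0 = posterior_nonnull(0.0, gamma=0.1, mu=3.0, sigma=1.0)
```

Even with only 10% contamination, a statistic near $\mu = 3$ is assigned high posterior probability of being non-null, while one near zero is assigned almost none; thresholding this posterior is the natural Bayes decision rule for flagging genes.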

20.
In this paper, we consider a regression analysis for a missing data problem in which the variables of primary interest are unobserved under a general biased sampling scheme, an outcome-dependent sampling (ODS) design. We propose a semiparametric empirical likelihood method for assessing the association between a continuous outcome response and unobservable factors of interest. Simulation results show that the ODS design can produce more efficient estimators than a simple random design of the same sample size. We demonstrate the proposed approach with a data set from an environmental study of genetic effects on human lung function in COPD smokers. The Canadian Journal of Statistics 40: 282–303; 2012 © 2012 Statistical Society of Canada


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号