Similar Articles
20 similar articles retrieved.
1.
2.
Hypertension is a highly prevalent cardiovascular disease and represents a considerable cost factor for many national health systems. Despite its prevalence, regional disease distributions are often unknown and must be estimated from survey data. However, health surveys frequently lack regional observations due to limited resources, so the resulting prevalence estimates suffer from unacceptably large sampling variances and are unreliable. Small area estimation solves this problem by linking auxiliary data from multiple regions in suitable regression models. Typically, either unit- or area-level observations are considered for this purpose, but with respect to hypertension both levels should be used: hypertension has characteristic comorbidities and is strongly related to lifestyle features, which are unit-level information, and it is also correlated with socioeconomic indicators that are usually measured at the area level. Combining the levels is challenging, however, as it requires multi-level model parameter estimation from small samples. We use a multi-level small area model with level-specific penalization to overcome this issue. Model parameter estimation is performed via stochastic coordinate gradient descent, and a jackknife estimator of the mean squared error is presented. The methodology is applied to combine health survey data and administrative records to estimate regional hypertension prevalence in Germany.
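The level-specific penalization idea can be pictured with a much simpler stand-in. The sketch below is a hypothetical illustration: it fits a linear predictor combining unit-level and area-level covariates by plain cyclic coordinate descent with a separate ridge penalty per level. The paper's stochastic coordinate gradient descent and its full multi-level small area model are not reproduced, and all names and values are invented.

```python
# A minimal sketch of level-specific penalization, assuming unit-level
# covariates X_unit and area-level covariates X_area with separate ridge
# penalties; cyclic (not stochastic) coordinate descent for simplicity.
import numpy as np

def ridge_cd(X, y, penalties, n_iter=200):
    """Cyclic coordinate descent for ridge with per-coefficient penalties."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]      # partial residual
            beta[j] = (X[:, j] @ r_j) / (col_sq[j] + penalties[j])
    return beta

rng = np.random.default_rng(0)
X_unit = rng.normal(size=(500, 3))        # e.g. lifestyle / comorbidity indicators
X_area = rng.normal(size=(500, 2))        # e.g. socioeconomic area features
y = X_unit @ [0.5, -0.3, 0.2] + X_area @ [0.4, 0.1] + rng.normal(0, 1, 500)
X = np.hstack([X_unit, X_area])
penalties = np.array([1.0] * 3 + [10.0] * 2)    # stronger shrinkage on area level
print(ridge_cd(X, y, penalties))
```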

3.
The authors propose the use of self‐modelling regression to analyze longitudinal data with time invariant covariates. They model the population time curve with a penalized regression spline and use a linear mixed model for transformation of the time and response scales to fit the individual curves. Fitting is done by an iterative algorithm using off‐the‐shelf linear and nonlinear mixed model software. Their method is demonstrated in a simulation study and in the analysis of tree swallow nestling growth from an experiment that includes an experimentally controlled treatment, an observational covariate and multi‐level sampling.
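As a rough illustration of the population-curve building block only, the sketch below fits a penalized regression spline (here a P-spline with a second-order difference penalty, an assumed concrete choice); the shape-invariant time/response transformations and the mixed-model fitting of individual curves are omitted.

```python
# A minimal sketch of a penalized (P-)spline fit of a mean time curve,
# assuming a cubic B-spline basis and a second-order difference penalty.
import numpy as np
from scipy.interpolate import BSpline

t = np.linspace(0, 1, 200)
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)

k, n_inner = 3, 15                                  # cubic splines, 15 interior knots
knots = np.concatenate([[0] * k, np.linspace(0, 1, n_inner + 2), [1] * k])
n_basis = len(knots) - k - 1
B = BSpline.design_matrix(t, knots, k).toarray()    # basis matrix
D = np.diff(np.eye(n_basis), n=2, axis=0)           # 2nd-order difference penalty
lam = 1.0
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
fitted = B @ coef                                   # estimated population curve
```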

4.
In this article, small area estimation under a multivariate linear model for repeated measures data is considered. The proposed model borrows strength both across small areas and over time, and accounts for repeated surveys, grouped response units, and random effects variations. Estimation of model parameters is discussed within a likelihood-based approach. Prediction of random effects, small area means across time points, and per-group units is derived. A parametric bootstrap method is proposed for estimating the mean squared error of the predicted small area means. Results are supported by a simulation study.
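The parametric bootstrap idea can be sketched on a deliberately simplified random-intercept model; everything below (the model, the parameter values, the predict_area_means helper) is illustrative rather than the paper's multivariate repeated-measures setup.

```python
# A minimal sketch of a parametric-bootstrap MSE estimate for small-area
# means under a simple model y_ij = mu + v_i + e_ij.
import numpy as np

rng = np.random.default_rng(2)

def predict_area_means(y, areas, n_areas, sig_v2, sig_e2):
    mu = y.mean()
    preds = np.empty(n_areas)
    for i in range(n_areas):
        yi = y[areas == i]
        gamma = sig_v2 / (sig_v2 + sig_e2 / len(yi))    # shrinkage factor
        preds[i] = mu + gamma * (yi.mean() - mu)
    return preds

# "fitted" parameters (assumed known here to keep the sketch short)
mu_hat, sig_v2, sig_e2, n_areas, n_per = 2.0, 0.5, 1.0, 10, 8
areas = np.repeat(np.arange(n_areas), n_per)

B, sq_err = 200, np.zeros(n_areas)
for _ in range(B):
    v = rng.normal(0, np.sqrt(sig_v2), n_areas)           # bootstrap area effects
    y_b = mu_hat + v[areas] + rng.normal(0, np.sqrt(sig_e2), areas.size)
    true_means = mu_hat + v                               # bootstrap "truth"
    sq_err += (predict_area_means(y_b, areas, n_areas, sig_v2, sig_e2) - true_means) ** 2
print("bootstrap MSE per area:", sq_err / B)
```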

5.
The author examines the existence, uniqueness, and identifiability of estimators produced by maximum likelihood for a model where the canonical parameter of an exponential family gradually begins to drift from its initial value at an unknown change point. He illustrates these properties with theoretical examples and applies his results to global warming data and failure data for emergency diesel generators.
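A toy version of the computation: Gaussian data whose mean begins a linear drift at an unknown change point, with the drift parameters profiled out by least squares over a grid of candidate change points. This is an assumed special case for illustration, not the paper's general exponential-family treatment.

```python
# A minimal sketch of profile maximum likelihood for a "broken-line" drift
# starting at an unknown change point tau.
import numpy as np

rng = np.random.default_rng(3)
n, tau_true = 100, 60
t = np.arange(n)
y = 1.0 + 0.05 * np.clip(t - tau_true, 0, None) + rng.normal(0, 0.5, n)

best = (np.inf, None)
for tau in range(5, n - 5):                      # grid over candidate change points
    X = np.column_stack([np.ones(n), np.clip(t - tau, 0, None)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()            # profile out (mu0, delta) by OLS
    if rss < best[0]:
        best = (rss, tau)
print("estimated change point:", best[1])
```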

6.
The authors propose a reduction technique and versions of the EM algorithm and the vertex exchange method to perform constrained nonparametric maximum likelihood estimation of the cumulative distribution function given interval censored data. The constrained vertex exchange method can be used in practice to produce likelihood intervals for the cumulative distribution function. In particular, the authors show how to produce a confidence interval with known asymptotic coverage for the survival function given current status data.
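The unconstrained NPMLE part has a classical self-consistency (EM) form that can be sketched compactly; the paper's reduction technique, the vertex exchange method, and the likelihood-interval construction are not shown.

```python
# A minimal sketch of the self-consistency (EM) algorithm for the NPMLE of a
# distribution from interval-censored data (L_i, R_i], on invented intervals.
import numpy as np

intervals = np.array([[0, 2], [1, 3], [2, 5], [0, 1], [3, 6], [1, 4]], float)
support = np.unique(intervals)                   # candidate support points
A = (support > intervals[:, [0]]) & (support <= intervals[:, [1]])  # membership

p = np.full(support.size, 1 / support.size)
for _ in range(500):                             # EM iterations
    denom = A @ p                                # P(observed interval) per subject
    weights = (A * p) / denom[:, None]           # E-step: split each subject's mass
    p = weights.mean(axis=0)                     # M-step: average the allocations
cdf = np.cumsum(p)
print(dict(zip(support, np.round(cdf, 3))))
```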

7.
8.
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). Although specific values cannot be assigned to these observations, the observed BQL data are informative and are generally known to lie below the lower limit of quantification (LLQ). Treating BQL values as missing data violates the usual missing at random (MAR) assumption underlying standard statistical methods and therefore leads to biased or less precise parameter estimates. By definition, these data lie within the interval [0, LLQ] and can be considered censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in modelling such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non‐linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and under a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods in dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model: a number of data sets were simulated from a one‐compartment, first‐order elimination PK model, and several quantification limits were applied to each simulated data set to generate data sets with given amounts of BQL data. The average percentage of BQL ranged from 25% to 75%. Its influence on the bias and precision of all population PK model parameters, such as clearance and volume of distribution, under each estimation approach was explored and compared. Copyright © 2009 John Wiley & Sons, Ltd.
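The censoring mechanism itself is easy to illustrate outside the mixed-effects setting: below, a log-normal concentration model where each BQL observation contributes the CDF mass below the LLQ to the likelihood instead of being discarded. All numbers are invented, and the full population PK models fitted in SAS, NONMEM and WinBUGS are far richer than this.

```python
# A minimal sketch of a censored (BQL-aware) likelihood for log-normal
# concentrations: observed values use the density, BQL values use the CDF.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
true_mu, true_sd, llq = 1.0, 0.8, 1.5
conc = np.exp(rng.normal(true_mu, true_sd, 300))
obs, n_bql = conc[conc >= llq], (conc < llq).sum()

def negloglik(theta):
    mu, log_sd = theta
    sd = np.exp(log_sd)
    ll = norm.logpdf(np.log(obs), mu, sd).sum() - np.log(obs).sum()  # lognormal density
    ll += n_bql * norm.logcdf((np.log(llq) - mu) / sd)               # BQL contribution
    return -ll

fit = minimize(negloglik, x0=[0.0, 0.0])
print("mu, sd:", fit.x[0], np.exp(fit.x[1]))
```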

9.
In this paper a new distribution is proposed that provides more flexibility for modelling data with an upside-down bathtub hazard rate function. A substantial account of the mathematical properties of the new distribution is presented. The maximum likelihood estimators of the parameters are derived for both complete and censored data. Two corrective approaches are considered to derive modified estimators that are bias-free to second order. A numerical simulation is carried out to examine the efficiency of the bias correction. Finally, an application to a real data set illustrates the proposed distribution.
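Of the two corrective routes such work typically considers, the resampling one is easy to sketch: a parametric-bootstrap bias correction, shown here on the plain (biased) MLE of a normal variance purely for illustration, not on the paper's distribution.

```python
# A minimal sketch of parametric-bootstrap bias correction of an MLE.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(10, 2, 25)
theta_hat = x.var()                              # MLE, biased by factor (n-1)/n

B = 2000
boot = np.array([rng.normal(x.mean(), np.sqrt(theta_hat), x.size).var()
                 for _ in range(B)])             # refit on bootstrap samples
bias = boot.mean() - theta_hat                   # estimated bias
print("corrected:", theta_hat - bias, " unbiased s^2:", x.var(ddof=1))
```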

10.
We show that the maximum likelihood estimators (MLEs) of the fixed effects and within‐cluster correlation are consistent in a heteroscedastic nested‐error regression (HNER) model with completely unknown within‐cluster variances under mild conditions. The result implies that the empirical best linear unbiased prediction (EBLUP) method for small area estimation is valid in such a case. We also show that ignoring the heteroscedasticity can lead to inconsistent estimation of the within‐cluster correlation and inferior predictive performance. A jackknife measure of uncertainty for the EBLUP is developed under the HNER model. Simulation studies are carried out to investigate the finite‐sample performance of the EBLUP and MLE under the HNER model, with comparisons to those under the nested‐error regression model in various situations, as well as that of the jackknife measure of uncertainty. The well‐known Iowa crops data is used for illustration. The Canadian Journal of Statistics 40: 588–603; 2012 © 2012 Statistical Society of Canada
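The heteroscedastic shrinkage idea can be caricatured in a few lines: an EBLUP-type predictor in which each cluster receives its own error variance, with a crude moment estimate of the between-cluster variance. The paper's ML theory and jackknife MSE measure are not reproduced.

```python
# A minimal sketch of cluster-specific (heteroscedastic) EBLUP shrinkage.
import numpy as np

rng = np.random.default_rng(6)
n_cl, n_i, sig_v = 12, 20, 0.7
sig_e = rng.uniform(0.5, 3.0, n_cl)              # heteroscedastic cluster sds
v = rng.normal(0, sig_v, n_cl)
y = np.array([rng.normal(5 + v[i], sig_e[i], n_i) for i in range(n_cl)])

mu_hat = y.mean()
s2_i = y.var(axis=1, ddof=1)                     # cluster-specific error variances
sig_v2_hat = max(y.mean(axis=1).var(ddof=1) - s2_i.mean() / n_i, 1e-8)
gamma = sig_v2_hat / (sig_v2_hat + s2_i / n_i)   # cluster-specific shrinkage
eblup = mu_hat + gamma * (y.mean(axis=1) - mu_hat)
print(np.round(eblup, 2))
```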

11.
In longitudinal data, missing observations occur commonly in both responses and covariates. The missingness can follow a 'missing not at random' mechanism with a non-monotone pattern, and the response and covariates need not be missing simultaneously. To avoid complexities in both modelling and computation, a two-stage estimation method and a pairwise-likelihood method have been proposed. The two-stage method is computationally simple but incurs a more severe efficiency loss; the pairwise approach yields more efficient estimators but can be computationally cumbersome. In this paper, we develop a compromise method using a hybrid pairwise-likelihood framework. Our proposed approach is more efficient than the two-stage method, while its computational cost remains reasonable compared with the pairwise approach. The performance of the methods is evaluated empirically by means of simulation studies, and the methods are used to analyse longitudinal data from the National Population Health Study.

12.
The last observation carried forward (LOCF) approach is commonly utilized to handle missing values in the primary analysis of clinical trials. However, recent evidence suggests that likelihood‐based analyses developed under the missing at random (MAR) framework are sensible alternatives. The objective of this study was to assess the Type I error rates from a likelihood‐based MAR approach – mixed‐model repeated measures (MMRM) – compared with LOCF when estimating treatment contrasts for mean change from baseline to endpoint (Δ). Data emulating neuropsychiatric clinical trials were simulated in a 4 × 4 factorial arrangement of scenarios, using four patterns of mean changes over time and four strategies for deleting data to generate subject dropout via an MAR mechanism. In data with no dropout, estimates of Δ and SEΔ from MMRM and LOCF were identical. In data with dropout, the Type I error rates (averaged across all scenarios) for MMRM and LOCF were 5.49% and 16.76%, respectively. In 11 of the 16 scenarios, the Type I error rate from MMRM was at least 1.00% closer to the expected rate of 5.00% than the corresponding rate from LOCF. In no scenario did LOCF yield a Type I error rate that was at least 1.00% closer to the expected rate than the corresponding rate from MMRM. The average estimate of SEΔ from MMRM was greater in data with dropout than in complete data, whereas the average estimate of SEΔ from LOCF was smaller in data with dropout than in complete data, suggesting that standard errors from MMRM better reflected the uncertainty in the data. The results from this investigation support those from previous studies, which found that MMRM provided reasonable control of Type I error even in the presence of MNAR missingness. No universally best approach to analysis of longitudinal data exists. However, likelihood‐based MAR approaches have been shown to perform well in a variety of situations and are a sensible alternative to the LOCF approach. MNAR methods can be used within a sensitivity analysis framework to test the potential presence and impact of MNAR data, thereby assessing robustness of results from an MAR method. Copyright © 2004 John Wiley & Sons, Ltd.
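The LOCF half of the comparison is simple enough to sketch directly; the MMRM comparator requires full mixed-model software and is omitted. The subject-by-visit data below are invented.

```python
# A minimal sketch of LOCF imputation on a subject-by-visit matrix with
# monotone dropout (NaN after withdrawal), then the baseline-to-endpoint change.
import numpy as np

def locf(y):
    """Carry each subject's last observed value forward over NaNs."""
    y = y.copy()
    for row in y:
        for j in range(1, row.size):
            if np.isnan(row[j]):
                row[j] = row[j - 1]
    return y

y = np.array([[0.0, 1.0, np.nan, np.nan],
              [0.0, 0.5, 1.5, 2.0],
              [0.0, np.nan, np.nan, np.nan]])
y_locf = locf(y)
delta = y_locf[:, -1] - y_locf[:, 0]       # change from baseline to endpoint
print(y_locf, "\nmean change:", delta.mean())
```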

13.
A marginal pairwise-likelihood estimation approach is examined for the mixed Rasch model with binary responses and a logit link. This method, which belongs to the broad class of composite likelihoods, provides estimators with desirable asymptotic properties such as consistency and asymptotic normality. We study the performance of the proposed methodology when the random-effect distribution is misspecified. A simulation study was conducted to compare this approach with maximum marginal likelihood. The results are also illustrated with an analysis of a real data set from a quality-of-life study.
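A hypothetical sketch of the pairwise-likelihood construction for a mixed Rasch model: pair probabilities are integrated over the normal ability distribution by Gauss-Hermite quadrature and summed over all item pairs. The item difficulties, sample size, and quadrature order are assumptions for illustration.

```python
# A minimal sketch of pairwise (composite) likelihood for a mixed Rasch model.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

rng = np.random.default_rng(11)
n, items, sigma_true = 400, 4, 1.0
b_true = np.array([-1.0, -0.3, 0.4, 1.0])
theta = rng.normal(0, sigma_true, n)
Y = (rng.random((n, items)) < 1 / (1 + np.exp(-(theta[:, None] - b_true)))).astype(int)

nodes, wts = np.polynomial.hermite.hermgauss(21)

def neg_pairwise_ll(par):
    b, sig = par[:items], np.exp(par[-1])
    th = np.sqrt(2) * sig * nodes                 # quadrature points for N(0, sig^2)
    w = wts / np.sqrt(np.pi)
    p = 1 / (1 + np.exp(-(th[:, None] - b)))      # (quad, items) response probs
    ll = 0.0
    for j, k in combinations(range(items), 2):
        pj = np.where(Y[:, [j]] == 1, p[:, j], 1 - p[:, j])   # (n, quad)
        pk = np.where(Y[:, [k]] == 1, p[:, k], 1 - p[:, k])
        ll += np.log((pj * pk) @ w).sum()         # integrate out the ability
    return -ll

fit = minimize(neg_pairwise_ll, np.zeros(items + 1), method="Nelder-Mead",
               options={"maxiter": 2000})
print("difficulties:", fit.x[:items], "sigma:", np.exp(fit.x[-1]))
```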

14.
15.
This paper describes a wavelet method for the estimation of density and hazard rate functions from randomly right-censored data. We adopt a nonparametric approach in assuming that the density and hazard rate have no specific parametric form. The method is based on dividing the time axis into a dyadic number of intervals and then counting the number of events within each interval. The number of events and the survival function of the observations are separately smoothed over time via linear wavelet smoothers, and the hazard rate function estimators are obtained by taking the ratio. We prove that the estimators are pointwise and globally mean-square consistent, attain the best possible asymptotic mean integrated squared error convergence rate, and are asymptotically normally distributed. We also describe simulation experiments showing that these estimators are reasonably reliable in practice. The method is illustrated with two real examples. The first uses survival time data for patients with liver metastases from a colorectal primary tumour without other distant metastases. The second concerns durations of unemployment for women, where the wavelet estimate, through its flexibility, provides a new and interesting interpretation.
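The binning step is easy to sketch. Below, a crude moving average stands in for the linear wavelet smoothers of the paper, so this shows the structure of the estimator (events over at-risk exposure per dyadic interval) rather than its wavelet form; all data are simulated.

```python
# A minimal sketch: event counts over a dyadic grid, a life-table-type
# hazard formed as a ratio, and a stand-in smoother.
import numpy as np

rng = np.random.default_rng(7)
n = 500
t = rng.exponential(1.0, n)                    # latent event times
c = rng.exponential(1.5, n)                    # right-censoring times
obs, event = np.minimum(t, c), t <= c

J = 5                                          # dyadic resolution: 2^J bins
edges = np.linspace(0, obs.max(), 2 ** J + 1)
width = edges[1] - edges[0]
deaths, _ = np.histogram(obs[event], edges)
at_risk = np.array([(obs >= e).sum() for e in edges[:-1]])
hazard = deaths / np.maximum(at_risk * width, 1)

kernel = np.ones(3) / 3                        # crude stand-in for a wavelet smoother
hazard_smooth = np.convolve(hazard, kernel, mode="same")
print(np.round(hazard_smooth, 3))
```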

16.
We address the identifiability and estimation of recursive max‐linear structural equation models represented by an edge‐weighted directed acyclic graph (DAG). Such models are generally unidentifiable, and we identify the whole class of DAGs and edge weights corresponding to a given observational distribution. For estimation, standard likelihood theory cannot be applied because the corresponding families of distributions are not dominated. Given the underlying DAG, we present an estimator for the class of edge weights and show that it can be considered a generalized maximum likelihood estimator. In addition, we develop a simple method for identifying the structure of the DAG. With probability tending to one at an exponential rate in the number of observations, this method correctly identifies the class of DAGs and, similarly, exactly identifies the possible edge weights.
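On the smallest possible DAG, the flavour of the estimation problem can be shown in a few lines: data from a two-node recursive max-linear model, and the natural minimum-ratio estimate of the edge weight (the ratio X2/X1 is bounded below by the weight and attains it with positive probability). This is a heavily simplified sketch in the spirit of, not a reproduction of, the paper's generalized MLE.

```python
# A minimal sketch on a two-node DAG 1 -> 2: X2 = max(c12 * X1, Z2),
# estimated by the minimum observed ratio X2 / X1.
import numpy as np

rng = np.random.default_rng(8)
n, c12 = 1000, 0.7
Z1, Z2 = rng.pareto(3, n) + 1, rng.pareto(3, n) + 1   # heavy-tailed innovations
X1 = Z1
X2 = np.maximum(c12 * X1, Z2)

c12_hat = (X2 / X1).min()                             # minimum-ratio estimate
print("estimated edge weight:", c12_hat)
```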

17.
The proportional hazards model is the most commonly used model in regression analysis of failure time data and has been discussed by many authors under various situations (Kalbfleisch & Prentice, 2002. The Statistical Analysis of Failure Time Data, Wiley, New York). This paper considers the fitting of the model to current status data in the presence of competing risks, a situation that often occurs in, for example, medical studies. The maximum likelihood estimates of the unknown parameters are derived and their consistency and convergence rate are established. We also show that the estimates of the regression coefficients are efficient and asymptotically normally distributed. Simulation studies are conducted to assess the finite sample properties of the estimates, and an illustrative example is provided. The Canadian Journal of Statistics © 2009 Statistical Society of Canada
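A parametric caricature of the current-status likelihood under proportional hazards, with a Weibull baseline standing in for the nonparametric one and competing risks omitted: each subject contributes a Bernoulli term with success probability 1 - exp(-Lambda0(C) * exp(x*beta)). All values are invented.

```python
# A minimal sketch of current-status (case-1 interval-censored) MLE under
# proportional hazards with an assumed Weibull baseline Lambda0(t) = lam * t^a.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(12)
n, beta_true = 500, 0.8
x = rng.normal(size=n)
T = rng.weibull(1.5, n) / np.exp(beta_true * x / 1.5)   # PH Weibull event times
C = rng.uniform(0.2, 2.5, n)                            # monitoring times
delta = (T <= C).astype(float)                          # current status only

def negll(par):
    beta, log_a, log_lam = par
    a, lam = np.exp(log_a), np.exp(log_lam)
    cum = lam * C ** a * np.exp(beta * x)               # Lambda0(C) * exp(x beta)
    pr = np.clip(1 - np.exp(-cum), 1e-12, 1 - 1e-12)    # P(T <= C | x)
    return -(delta * np.log(pr) + (1 - delta) * np.log(1 - pr)).sum()

fit = minimize(negll, [0.0, 0.0, 0.0], method="Nelder-Mead")
print("beta_hat:", fit.x[0])
```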

18.
In the context of ACD models for ultra-high-frequency data, different specifications are available to estimate the conditional mean of intertrade durations, while quantile estimation has been almost completely neglected in the literature, even though for trading purposes it can be more informative. The main problem in quantile estimation is the correct specification of the durations' probability law: the usual assumption of exponentially distributed residuals is very robust for estimating the parameters of the conditional mean, but dramatically fails to fit the distribution. In this paper a semiparametric approach is formalized and compared with the parametric one derived from the exponential assumption. Empirical evidence from a stock on the Italian financial market strongly supports the semiparametric approach. The author wishes to thank Prof. A. Mazzali, Dott. G. De Luca, and Dott. M. Sandri for valuable comments.
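The parametric/semiparametric contrast is easy to demonstrate: fit an ACD(1,1) by exponential quasi-likelihood, then form the 95% conditional duration quantile either from the exponential inverse CDF or from the empirical quantile of the estimated residuals. The specifics below (Weibull innovations, parameter values) are assumptions for illustration.

```python
# A minimal sketch: ACD(1,1) fit by exponential QML, then parametric vs
# semiparametric conditional quantile factors for the duration x_i = psi_i * eps_i.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma

rng = np.random.default_rng(9)
n, (w, a, b) = 2000, (0.1, 0.1, 0.8)
m = gamma(1 + 1 / 1.4)                        # mean of a Weibull(1.4) draw
eps = rng.weibull(1.4, n) / m                 # unit-mean, non-exponential residuals
x, psi = np.empty(n), np.empty(n)
psi[0] = w / (1 - a - b); x[0] = psi[0] * eps[0]
for i in range(1, n):
    psi[i] = w + a * x[i - 1] + b * psi[i - 1]
    x[i] = psi[i] * eps[i]

def filt(theta):
    w_, a_, b_ = np.exp(theta)                # positivity via log-parametrization
    ps = np.empty(n); ps[0] = x.mean()
    for i in range(1, n):
        ps[i] = w_ + a_ * x[i - 1] + b_ * ps[i - 1]
    return ps

def negql(theta):                             # exponential quasi-likelihood
    ps = filt(theta)
    return (np.log(ps) + x / ps).sum()

fit = minimize(negql, np.log([0.2, 0.2, 0.6]), method="Nelder-Mead")
resid = x / filt(fit.x)
q = 0.95
print("parametric 95% factor:     ", -np.log(1 - q))          # exponential inverse cdf
print("semiparametric 95% factor: ", np.quantile(resid, q))   # empirical residual quantile
```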

19.
We introduce a new class of distributions called the Weibull Marshall–Olkin-G family and obtain some of its mathematical properties. The special models of this family provide bathtub-shaped, decreasing-increasing, increasing-decreasing-increasing, decreasing-increasing-decreasing, monotone, unimodal and bimodal hazard functions. The maximum likelihood method is adopted for estimating the model parameters, and we assess the performance of the maximum likelihood estimators by means of two simulation studies. We also propose a new family of linear regression models for censored and uncensored data. The flexibility and importance of the proposed models are illustrated by means of three real data sets.
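The simulation-study component can be sketched with the plain Weibull standing in for the full Weibull Marshall–Olkin-G model (whose cdf is not given in the abstract): repeated MLE fits summarized by bias and RMSE.

```python
# A minimal sketch of a simulation study of MLE performance (bias, RMSE),
# using the plain Weibull as an illustrative stand-in for the full family.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(10)
shape_true, scale_true, n, reps = 1.8, 2.0, 100, 300
est = np.array([
    weibull_min.fit(weibull_min.rvs(shape_true, scale=scale_true,
                                    size=n, random_state=rng), floc=0)[::2]
    for _ in range(reps)
])                                             # columns: (shape, scale) MLEs
bias = est.mean(axis=0) - [shape_true, scale_true]
rmse = np.sqrt(((est - [shape_true, scale_true]) ** 2).mean(axis=0))
print("bias:", bias, "rmse:", rmse)
```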

20.