Similar Articles
20 similar articles found (search time: 125 ms)
1.
The relationship between body weight and mortality is examined using U.S. data on 13,242 individuals from the National Health and Nutrition Examination Survey I (NHANES I) Epidemiologic Follow-up Study, with emphasis on identifying the body mass index associated with the lowest levels of mortality. Factors such as smoking status, sex, race, and age are taken into consideration. The results suggest that only the interaction between race and body mass index is significant.

2.
Despite advances in public health practice and medical technology, disparities in health among racial/ethnic and socioeconomic groups remain a concern, which has prompted the Department of Health and Human Services to designate the elimination of health disparities as an overarching goal of Healthy People 2010. To assess progress towards this goal, suitable measures are needed at the population level that can be tracked over time; statistical inferential procedures have to be developed for these population-level measures; and data sources have to be identified to allow such inferences to be conducted. Popular data sources for health disparities research are large surveys such as the National Health Interview Survey (NHIS) or the Behavioral Risk Factor Surveillance System (BRFSS). The self-reported disease status collected in these surveys may be inaccurate, and the errors may be correlated with variables used in defining the groups. This article uses the National Health and Nutrition Examination Survey (NHANES) 99-00 to assess the extent of error in self-reported disease status; uses a Bayesian framework to develop corrections for the self-reported disease status in the National Health Interview Survey (NHIS) 99-00; and compares inferences about various measures of health disparities with and without correcting for measurement error. The methodology is illustrated using hypertension, a common risk factor for cardiovascular disease. JEL classification C1 (C11, C13, C15), C4 (C42) and I3 (I31, I38)

3.
When the finite population ‘totals’ are estimated for individual areas, they do not necessarily add up to the known ‘total’ for all areas. Benchmarking (BM) is a technique used to ensure that the totals for all areas match the grand total, which can be obtained from an independent source. BM is desirable to practitioners of survey sampling. BM shifts the small-area estimators to accommodate the constraint. In doing so, it can provide increased precision to the small-area estimators of the finite population means or totals. The Scott–Smith model is used to benchmark the finite population means of small areas. This is a one-way random effects model for a superpopulation, and it is computationally convenient to use a Bayesian approach. We illustrate our method by estimating body mass index using data in the third National Health and Nutrition Examination Survey. Several properties of the benchmarked small-area estimators are obtained using a simulation study.
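The benchmarking constraint can be illustrated with a simple ratio adjustment (a minimal sketch, not the Scott–Smith Bayesian model described above; the area-level BMI means and population shares below are hypothetical):

```python
import numpy as np

def ratio_benchmark(estimates, shares, grand_mean):
    """Scale small-area estimates so their share-weighted average
    equals a known grand mean (simple ratio benchmarking)."""
    estimates = np.asarray(estimates, dtype=float)
    shares = np.asarray(shares, dtype=float) / np.sum(shares)
    current = np.dot(shares, estimates)
    return estimates * (grand_mean / current)

# Hypothetical area-level BMI means and population shares
areas = [26.1, 27.4, 25.2]
shares = [0.5, 0.3, 0.2]
adjusted = ratio_benchmark(areas, shares, grand_mean=26.0)
```

After the adjustment, the share-weighted average of the area estimates matches the independently known grand mean exactly, at the cost of shifting every area estimate by a common factor.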

4.
Research concerning hospital readmissions has mostly focused on statistical and machine learning models that attempt to predict this unfortunate outcome for individual patients. These models are useful in certain settings, but their performance in many cases is insufficient for implementation in practice, and the dynamics of how readmission risk changes over time is often ignored. Our objective is to develop a model for aggregated readmission risk over time – using a continuous-time Markov chain – beginning at the point of discharge. We derive point and interval estimators for readmission risk, and find the asymptotic distributions for these probabilities. Finally, we validate our derived estimators using simulation, and apply our methods to estimate readmission risk over time using discharge and readmission data for surgical patients.
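A minimal sketch of the continuous-time Markov chain idea: given a generator matrix of transition rates (the two-state chain and the 0.02/day readmission rate below are invented for illustration), the probability of having been readmitted by time t is read off the matrix exponential of the generator.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator; states: 0 = discharged home, 1 = readmitted (absorbing)
Q = np.array([[-0.02, 0.02],
              [ 0.00, 0.00]])   # rates per day

def readmission_prob(Q, t, start=0, readmit=1):
    """P(in readmitted state at time t | discharged at time 0),
    computed via the matrix exponential of the generator Q."""
    return expm(Q * t)[start, readmit]

p30 = readmission_prob(Q, 30.0)   # 30-day readmission risk
```

For this two-state chain the answer has the closed form 1 − exp(−0.02 · 30), so the matrix-exponential route can be checked directly; its value is that it extends unchanged to chains with intermediate states.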

5.
Despite advances in public health practice and medical technology, disparities in health among racial/ethnic and socioeconomic groups remain a concern, which has prompted the Department of Health and Human Services to designate the elimination of health disparities as an overarching goal of Healthy People 2010. To assess progress towards this goal, suitable measures are needed at the population level that can be tracked over time; statistical inferential procedures have to be developed for these population-level measures; and data sources have to be identified to allow such inferences to be conducted. Popular data sources for health disparities research are large surveys such as the National Health Interview Survey (NHIS) or the Behavioral Risk Factor Surveillance System (BRFSS). The self-reported disease status collected in these surveys may be inaccurate, and the errors may be correlated with variables used in defining the groups. This article uses the National Health and Nutrition Examination Survey (NHANES) 99-00 to assess the extent of error in self-reported disease status; uses a Bayesian framework to develop corrections for the self-reported disease status in the National Health Interview Survey (NHIS) 99-00; and compares inferences about various measures of health disparities with and without correcting for measurement error. The methodology is illustrated using hypertension, a common risk factor for cardiovascular disease.

6.
Unweighted estimators using data collected in a sample survey can be badly biased, whereas weighted estimators are approximately unbiased for population parameters. We present four examples using data from the 1988 National Maternal and Infant Health Survey to demonstrate that weighted and unweighted estimators can be quite different, and to show the underlying causes of such differences.
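The weighted/unweighted gap is easy to reproduce with toy numbers (the data below are invented, not from the survey): when one sampled unit stands in for many population units, ignoring its weight pulls the estimate toward the oversampled group.

```python
import numpy as np

def weighted_mean(y, w):
    """Survey-weighted mean: w[i] is the weight (e.g. inverse
    inclusion probability) attached to sampled unit i."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    return np.sum(w * y) / np.sum(w)

# Toy sample: three oversampled units with y = 10, one unit
# representing 9 population units with y = 2
y = np.array([10.0, 10.0, 10.0, 2.0])
w = np.array([1.0, 1.0, 1.0, 9.0])

unweighted = y.mean()            # 8.0
weighted = weighted_mean(y, w)   # 4.0
```

Here the unweighted mean (8.0) doubles the weighted, approximately design-unbiased value (4.0), illustrating how sharply the two can disagree.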

7.
In this article, we present an analysis of head and neck cancer data using a generalized inverse Lindley stress–strength reliability model. We propose Bayes estimators for estimating P(X > Y), when X and Y represent survival times of two groups of cancer patients observed under different therapies. X and Y are assumed to be independent generalized inverse Lindley random variables with a common shape parameter. Bayes estimators are obtained under symmetric and asymmetric loss functions assuming independent gamma priors. Since the posterior is complex and does not yield closed-form expressions for the Bayes estimators, Lindley's approximation and Markov chain Monte Carlo techniques are utilized for Bayesian computation. An extensive simulation experiment is carried out to compare the performance of the Bayes estimators with the maximum likelihood estimators on the basis of simulated risks. Asymptotic, bootstrap, and Bayesian credible intervals are also computed for P(X > Y).
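The stress–strength quantity P(X > Y) can always be approximated by simulation once samplers for X and Y are in hand; the sketch below uses exponential survival times purely for illustration (not the generalized inverse Lindley machinery of the article), because the true value λ_Y/(λ_X + λ_Y) is then known for checking.

```python
import random

def prob_x_greater_y(draw_x, draw_y, n=100_000, seed=1):
    """Monte Carlo estimate of P(X > Y) from two independent samplers."""
    rng = random.Random(seed)
    hits = sum(draw_x(rng) > draw_y(rng) for _ in range(n))
    return hits / n

# Toy example: exponential survival times with rates 0.5 and 1.0,
# so the true P(X > Y) = 1.0 / (0.5 + 1.0) = 2/3
p = prob_x_greater_y(lambda r: r.expovariate(0.5),
                     lambda r: r.expovariate(1.0))
```

With 100,000 draws the Monte Carlo standard error is about 0.0015, so the estimate lands within roughly 0.005 of 2/3.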

8.
In this paper, we study a homogeneous, ergodic, finite-state Markov chain with unknown transition probability matrix. Starting from the well-known maximum likelihood estimator of the transition probability matrix, we define estimators of reliability and related measures. Our aim is to show that these estimators are uniformly strongly consistent and converge in distribution to normal random variables. The construction of confidence intervals for availability, reliability, and failure rates is also given. Finally, we give a numerical example to illustrate and compare our results with those of the usual empirical estimator.
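The maximum likelihood estimator that this construction starts from is simply the matrix of observed transition counts normalized row by row; a minimal sketch (the short two-state chain is invented for illustration):

```python
import numpy as np

def mle_transition_matrix(chain, n_states):
    """MLE of a Markov transition matrix: count observed
    transitions, then normalize each row; rows with no
    observed departures are left as zeros."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(chain[:-1], chain[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows,
                     out=np.zeros_like(counts), where=rows > 0)

P_hat = mle_transition_matrix([0, 1, 0, 0, 1, 1, 0, 1], 2)
```

Reliability and availability estimators are then obtained by plugging P_hat into the corresponding functionals of the true transition matrix.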

9.
In this paper, statistical inference for the unknown parameters of a Burr Type III (BIII) distribution based on a unified hybrid censored sample is studied. The maximum likelihood estimators of the unknown parameters are obtained using the Expectation–Maximization algorithm. It is observed that the Bayes estimators cannot be obtained in explicit form, hence Lindley's approximation and the Markov Chain Monte Carlo (MCMC) technique are used to compute them. Further, the highest posterior density credible intervals of the unknown parameters based on the MCMC samples are provided. A new model selection test is developed for discriminating between two competing models under the unified hybrid censoring scheme. Finally, the potential of the BIII distribution for analyzing real data is illustrated using fracture toughness data for three different materials, namely silicon nitride (Si3N4), zirconium dioxide (ZrO2), and sialon (Si6−xAlxOxN8−x). It is observed that for the present data sets, the BIII distribution provides a better fit than the Weibull distribution, which is frequently used in fracture toughness data analysis.

10.
Bayesian estimates of the parameters of a finite mixture of the Burr type XII distribution and its reciprocal are obtained based on type-I censored data. The Bayes estimators are computed under squared error and LINEX loss functions using a Markov chain Monte Carlo algorithm. In a Monte Carlo simulation, the Bayes estimators are compared with their corresponding maximum likelihood estimators.

11.
In recent years, there has been an increased interest in combining probability and nonprobability samples. Nonprobability samples are cheaper and quicker to conduct, but the resulting estimators are vulnerable to bias as the participation probabilities are unknown. To adjust for the potential bias, estimation procedures based on parametric or nonparametric models have been discussed in the literature. However, the validity of the resulting estimators relies heavily on the validity of the underlying models. Also, nonparametric approaches may suffer from the curse of dimensionality and poor efficiency. We propose a data integration approach by combining multiple outcome regression models and propensity score models. The proposed approach can be used for estimating general parameters including totals, means, distribution functions, and percentiles. The resulting estimators are multiply robust in the sense that they remain consistent if all but one model are misspecified. The asymptotic properties of point and variance estimators are established. The results from a simulation study show the benefits of the proposed method in terms of bias and efficiency. Finally, we apply the proposed method using data from the Korea National Health and Nutrition Examination Survey and data from the National Health Insurance Sharing Services.

12.
We present sufficient conditions for the absolute continuity of Markov chains, which extend the work of Dion and Ferland (1995). We allow the state space to be countable, and apply our results to some examples which include Polya’s urn. In a statistical problem, the conditions are sufficient for the non-existence of consistent estimators of the distribution generating the data.

13.
Consider a process that jumps among a finite set of states, with random times spent in between. In semi-Markov processes, transitions follow a Markov chain and the sojourn distributions depend only on the connecting states. Suppose that the process started far in the past, achieving stationarity. We consider non-parametric estimation by modelling the log-hazard of the sojourn times through linear splines, and we obtain maximum penalized likelihood estimators when the data consist of several i.i.d. windows. We prove consistency using Grenander's method of sieves.

14.
A natural way to deal with the uncertainty of an ergodic finite state space Markov process is to investigate the entropy of its stationary distribution. When the process is observed, it becomes necessary to estimate this entropy. We estimate both the stationary distribution and its entropy by plug-in of the estimators of the infinitesimal generator. Three situations of observation are discussed: one long trajectory is observed, several independent short trajectories are observed, or the process is observed at discrete times. The good asymptotic behavior of the plug-in estimators is established. We also illustrate the behavior of the estimators through simulation.
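The plug-in idea can be sketched in a few lines: given a (here assumed known, in practice estimated) generator Q, solve πQ = 0 with π summing to one for the stationary distribution, then evaluate its Shannon entropy. The two-state generator below is invented for illustration.

```python
import numpy as np

def stationary_entropy(Q):
    """Stationary distribution of a conservative generator Q
    (rows sum to zero) and the entropy of that distribution."""
    n = Q.shape[0]
    # Solve pi Q = 0 together with sum(pi) = 1 as an augmented
    # least-squares system
    A = np.vstack([Q.T, np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi, -np.sum(pi * np.log(pi))

Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])
pi, H = stationary_entropy(Q)
```

For this Q the stationary distribution is (2/3, 1/3); replacing Q by any of the generator estimators discussed in the abstract gives the corresponding plug-in entropy estimator.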

15.
Survival data with one intermediate state are described by semi-Markov and Markov models for counting processes whose intensities are defined in terms of two stopping times T1 < T2. Problems of goodness-of-fit for these models are studied. The test statistics are constructed by comparing Nelson–Aalen estimators for data stratified according to T1. Asymptotic distributions of these statistics are established in terms of the weak convergence of some random fields. Asymptotic consistency of these test statistics is also established. Simulation studies are included to indicate their numerical performance.

16.
We consider the problem of estimating the rate matrix governing a finite-state Markov jump process given a number of fragmented time series. We propose to concatenate the observed series and to employ the emerging non-Markov process for estimation. We describe the bias arising if standard methods for Markov processes are used for the concatenated process, and provide a post-processing method to correct for this bias. This method applies to discrete-time Markov chains and to more general models based on Markov jump processes where the underlying state process is not observed directly. This is demonstrated in detail for a Markov switching model. We provide applications to simulated time series and to financial market data, where estimators resulting from maximum likelihood methods and Markov chain Monte Carlo sampling are improved using the presented correction.

17.
Accurate and efficient methods to detect unusual clusters of abnormal activity are needed in many fields such as medicine and business. Often the size of clusters is unknown; hence, multiple (variable) window scan statistics are used to identify clusters using a set of different potential cluster sizes. We give an efficient method to compute the exact distribution of multiple window discrete scan statistics for higher-order, multi-state Markovian sequences. We define a Markov chain to efficiently keep track of probabilities needed to compute p-values for the statistic. The state space of the Markov chain is set up by a criterion developed to identify strings that are associated with observing the specified values of the statistic. Using our algorithm, we identify cases where the available approximations do not perform well. We demonstrate our methods by detecting unusual clusters of made free throw shots by National Basketball Association players during the 2009–2010 regular season.
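For a single fixed window size, the discrete scan statistic itself is just the largest count of "successes" in any contiguous window; a minimal sliding-window sketch (the made/missed free-throw sequence is invented, and this computes only the statistic, not the exact Markov-chain p-values of the article):

```python
def max_window_count(seq, window, value=1):
    """Discrete scan statistic for one window size: the largest
    number of `value` outcomes in any contiguous run of `window`
    consecutive positions."""
    best = count = sum(1 for x in seq[:window] if x == value)
    for i in range(window, len(seq)):
        count += (seq[i] == value) - (seq[i - window] == value)
        best = max(best, count)
    return best

# Hypothetical made (1) / missed (0) free-throw sequence
shots = [1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1]
stat = max_window_count(shots, window=5)
```

The multiple-window version simply takes this statistic over a set of candidate window sizes, which is what makes an exact joint distribution nontrivial to compute.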

18.
The maximum likelihood and Bayesian approaches are considered for the two-parameter generalized exponential distribution based on record values with the number of trials following the record values (inter-record times). The maximum likelihood estimates are obtained under the inverse sampling and the random sampling schemes. It is shown that the maximum likelihood estimator of the shape parameter converges in mean square to the true value when the scale parameter is known. The Bayes estimates of the parameters are developed using Lindley's approximation and Markov Chain Monte Carlo methods, owing to the lack of explicit forms under the squared error and the linear-exponential loss functions. Confidence intervals for the parameters are constructed based on asymptotic and Bayesian methods. The Bayes and the maximum likelihood estimators are compared in terms of estimated risk by Monte Carlo simulations. The comparison of the estimators based on the record values alone and on the record values with their corresponding inter-record times is also performed using Monte Carlo simulations.

19.
For some discrete state series, such as DNA sequences, it can often be postulated that the probabilistic behaviour is governed by a Markov chain. For deciding whether or not an uncharacterized piece of DNA is part of the coding region of a gene under the Markovian assumption, two statistical tools are essential: hypothesis testing of the order of the Markov chain and estimation of the transition probabilities. To improve the traditional statistical procedures for both when the stationarity assumption is tenable, a new formulation of the homogeneity hypothesis is proposed, in which log-linear modelling is applied for conditional independence jointly with homogeneity restrictions on the expected means of transition counts in the sequence. In addition, a variety of test statistics and estimators can be considered by using φ-divergence measures. As special cases, the well-known likelihood-ratio test statistics and maximum likelihood estimators are obtained.
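The classical special case mentioned at the end, the likelihood-ratio (G²) test of order 0 (independence) against order 1, reduces to a comparison of observed transition counts with their expected values under independence; a minimal sketch on a toy sequence (the periodic string below is invented, and a real analysis would compare the statistic to a chi-square with (k−1)² degrees of freedom):

```python
import math
from collections import Counter

def lrt_order1_vs_order0(seq):
    """Likelihood-ratio (G^2) statistic for testing independence
    (order 0) against a first-order Markov chain on a discrete
    sequence, using observed vs. expected transition counts."""
    pair = Counter(zip(seq[:-1], seq[1:]))   # transition counts
    row = Counter(seq[:-1])                  # counts of 'from' states
    col = Counter(seq[1:])                   # counts of 'to' states
    n = len(seq) - 1
    g2 = 0.0
    for (a, b), nab in pair.items():
        expected = row[a] * col[b] / n       # expected under order 0
        g2 += 2 * nab * math.log(nab / expected)
    return g2

stat = lrt_order1_vs_order0("ACGTACGTACGT")
```

On this perfectly periodic toy sequence the statistic (about 30.2) far exceeds the 4-state chi-square reference with 9 degrees of freedom, correctly flagging strong first-order dependence.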

20.
We develop Markov chain Monte Carlo algorithms for estimating the parameters of the short-term interest rate model. Using Monte Carlo experiments we compare the Bayes estimators with the maximum likelihood and generalized method of moments estimators. We estimate the model using the Japanese overnight call rate data.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号