Similar Documents
20 similar documents found (search time: 0 ms)
1.
Summary.  In a modern computer-based forest harvester, tree stems are run in sequence through the measuring equipment root end first, and the length and diameter are simultaneously stored in a computer. These measurements may be used, for example, to determine the optimal cutting points of the stems. However, a problem that is often overlooked is that these variables are usually measured with error. We consider estimation and prediction of stem curves when the length and diameter measurements are subject to errors. It is shown that only in the simplest case of a first-order model can the estimation be carried out unbiasedly using standard least squares procedures. However, both the first- and second-degree models are unbiased in prediction. A study on real stem data is also used to illustrate the models discussed.

2.
A biomedical application of latent class models with random effects
Traditional latent class modelling has been used in many biomedical settings. Unfortunately, many of these applications assume that the diagnostic tests are independent given the true disease status, an assumption that is often violated in practice. Qu, Tan and Kutner developed general latent class models with random effects to model the conditional dependence among multiple diagnostic tests. In this paper latent class modelling with random effects is used to estimate the sensitivity and specificity of six screening tests for detecting Chlamydia trachomatis in endocervical specimens from women attending family planning clinics.

3.
In an attempt to provide a statistical tool for disease screening and prediction, we propose a semiparametric approach to analysis of the Cox proportional hazards cure model in situations where the observations on the event time are subject to right censoring and some covariates are missing not at random. To facilitate the methodological development, we begin with semiparametric maximum likelihood estimation (SPMLE) assuming that the (conditional) distribution of the missing covariates is known. A variant of the EM algorithm is used to compute the estimator. We then adapt the SPMLE to a more practical situation where the distribution is unknown and there is a consistent estimator based on available information. We establish the consistency and weak convergence of the resulting pseudo-SPMLE, and identify a suitable variance estimator. The application of our inference procedure to disease screening and prediction is illustrated via empirical studies. The proposed approach is used to analyze the tuberculosis screening study data that motivated this research. Its finite-sample performance is examined by simulation.

4.
We obtained banding and recovery data from the Bird Banding Laboratory (operated by the Biological Resources Division of the US Geological Survey) for adults from 129 avian species that had been continuously banded for > 24 years. Data were partitioned by gender, banding period (winter versus summer), and by states/provinces. Data sets were initially screened for adequacy based on specific criteria (e.g. minimum sample sizes). Fifty-nine data sets (11 waterfowl species, the Mourning Dove and Common Grackle) met our criteria of adequacy for further analysis. We estimated annual survival probabilities using the Brownie et al. recovery model {St, ft} in program MARK. Trends in annual survival and temporal process variation were estimated using random effects models based on shrinkage estimators. Waterfowl species had relatively little variation in annual survival probabilities (mean CV = 8.7% and 10% for males and females, respectively). The limited data for other species suggested similar low temporal variation for males, but higher temporal variation for females (CV = 40%). Evidence for long-term trends varied by species, banding period and sex, with no obvious spatial patterns for either positive or negative trends in survival probabilities. An exception was Mourning Doves banded in Illinois/Missouri and Arizona/New Mexico, where both males (slope = -0.0122, SE = 0.0019) and females (slopes = -0.0109 to -0.0128, SEs = 0.0018-0.0032) exhibited declining trends in survival probabilities. We believe our approach has application for large-scale monitoring. However, meaningful banding and recovery data for species other than waterfowl are very limited in North America.

5.
The Dirichlet process has been used extensively in Bayesian nonparametric modeling, and has proven to be very useful. In particular, mixed models with Dirichlet process random effects have been used to model many types of data and can often outperform their normal random effect counterparts. Here we examine the linear mixed model with Dirichlet process random effects from a classical view, and derive the best linear unbiased estimator (BLUE) of the fixed effects. We are also able to calculate the resulting covariance matrix, and we find that the covariance is directly related to the precision parameter of the Dirichlet process, giving a new interpretation of this parameter. We also characterize the relationship between the BLUE and the ordinary least-squares (OLS) estimator and show how confidence intervals can be approximated.
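The BLUE-OLS relationship mentioned above can be illustrated with a generic generalized least squares (GLS) computation. This is a minimal numerical sketch under a simple equicorrelated error covariance, not the paper's Dirichlet-process derivation; all parameter values are illustrative.

```python
import numpy as np

# Generic GLS sketch: with a known error covariance V, the best linear
# unbiased estimator (BLUE) of beta is (X' V^-1 X)^-1 X' V^-1 y; with
# V proportional to the identity it reduces to OLS.
rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])

# Equicorrelated errors, loosely mimicking a shared random effect.
rho = 0.4
V = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
L = np.linalg.cholesky(V)
y = X @ beta_true + L @ rng.normal(size=n)

V_inv = np.linalg.inv(V)
beta_blue = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)  # BLUE (GLS)
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)                   # OLS
```

The two estimators generally differ once the errors are correlated; comparing them for a given V is one way to see how much the dependence structure matters.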

6.
7.
8.
Absolute risk is the chance that a person with given risk factors and free of the disease of interest at age a will be diagnosed with that disease in the interval (a, a + τ]. Absolute risk is sometimes called cumulative incidence. Absolute risk is a "crude" risk because it is reduced by the chance that the person will die of competing causes of death before developing the disease of interest. Cohort studies admit flexibility in modeling absolute risk, either by allowing covariates to affect the cause-specific relative hazards or to affect the absolute risk itself. An advantage of cause-specific relative risk models is that various data sources can be used to fit the required components. For example, case-control data can be used to estimate relative risk and attributable risk, and these can be combined with registry data on age-specific composite hazard rates for the disease of interest and with national data on competing hazards of mortality to estimate absolute risk. Family-based designs, such as the kin-cohort design and collections of pedigrees with multiple affected individuals, can be used to estimate the genotype-specific hazard of disease. Such analyses must be adjusted for ascertainment, and failure to take into account residual familial risk, such as might be induced by unmeasured genetic variants or by unmeasured behavioral or environmental exposures that are correlated within families, can lead to overestimates of mutation-specific absolute risk in the general population.
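As a minimal illustration of why absolute ("crude") risk is reduced by competing mortality, assume constant cause-specific hazards over the interval (the hazard values below are hypothetical). The crude risk then has a closed form and always sits below the "net" risk that ignores competing death.

```python
import math

# With constant cause-specific hazards lam_disease and lam_death, the
# absolute (crude) risk of disease over an interval of length tau is
#   lam_disease / (lam_disease + lam_death) * (1 - exp(-(sum)*tau)),
# which is below the net risk 1 - exp(-lam_disease * tau) because
# competing mortality removes people before they can be diagnosed.
def absolute_risk(lam_disease, lam_death, tau):
    total = lam_disease + lam_death
    return lam_disease / total * (1.0 - math.exp(-total * tau))

net = 1.0 - math.exp(-0.01 * 10)        # risk ignoring competing death
crude = absolute_risk(0.01, 0.03, 10)   # risk with competing death
```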

9.
Evidence suggests that the increasing life expectancy at birth witnessed over the past centuries is associated with a decreasing concentration of survival times. The purpose of this work is to study the relationships between longevity and concentration measures for some regression models for the evolution of survival. In particular, we study a family of survival models that can be used to capture the observed trends in longevity and concentration over time. The parametric family of log-scale-location models is shown to allow for modelling different trends in the expected value and concentration of survival times. An extension towards mixture models is also described in order to account for scenarios where a fraction of the population experiences short-term survival, and some results are presented for this framework. The use of both the log-scale-location family and the mixture model is illustrated through an application to period life tables from the Human Mortality Database.

10.
According to the Atlas of Human Development in Brazil, the income dimension of the Municipal Human Development Index (MHDI-I) is an indicator of the ability of a municipality's population to secure a minimum standard of living providing for basic needs such as water, food and shelter. In public policy, one research objective is to identify social and economic variables that are associated with this index. Owing to income inequality, evaluating these associations at quantiles, rather than at the mean, can be of greater interest. Thus, in this paper, we develop Bayesian variable selection in quantile regression models with hierarchical random effects. In particular, we assume a likelihood function based on the Generalized Asymmetric Laplace distribution, and a spike-and-slab prior is used to perform variable selection. The Generalized Asymmetric Laplace distribution is a more general alternative to the Asymmetric Laplace distribution, which is the common approach to quantile regression under the Bayesian paradigm. The performance of the proposed method is evaluated via a comprehensive simulation study, and it is applied to the MHDI-I of municipalities located in the state of Rio de Janeiro.

11.
Summary.  As biological knowledge accumulates rapidly, gene networks encoding genome-wide gene-gene interactions have been constructed. As an improvement over the standard mixture model, which treats all genes as identically and independently distributed a priori, Wei and co-workers have proposed modelling a gene network as a discrete or Gaussian Markov random field (MRF) in a mixture model to analyse genomic data. However, how these methods compare in practical applications is not well understood, and this is the aim here. We also propose two novel constraints in prior specifications for the Gaussian MRF model and a fully Bayesian approach to the discrete MRF model. We assess the accuracy of estimating the false discovery rate by posterior probabilities in the context of MRF models. Applications to a chromatin immunoprecipitation-chip data set and simulated data show that the modified Gaussian MRF models have superior performance compared with other models, and both MRF-based mixture models, with reasonable robustness to misspecified gene networks, outperform the standard mixture model.

12.
ABSTRACT

The maximum likelihood and Bayesian approaches for estimating the parameters, and for predicting future record values, of the Kumaraswamy distribution have been considered when the lower record values along with the number of observations following the record values (inter-record times) have been observed. The Bayes estimates are obtained based on a joint bivariate prior for the shape parameters. In this case, Bayes estimates of the parameters have been developed using Lindley's approximation and the Markov chain Monte Carlo (MCMC) method, owing to the lack of explicit forms under the squared error and linear-exponential loss functions. The MCMC method has also been used to construct highest posterior density credible intervals. The Bayes and maximum likelihood estimates are compared in terms of estimated risk through Monte Carlo simulations. We further consider non-Bayesian and Bayesian prediction of future lower record values arising from the Kumaraswamy distribution, based on record values with their corresponding inter-record times and on record values alone. The comparison of the derived predictors is carried out using Monte Carlo simulations. Real data are analysed to illustrate the findings.
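A small sketch of the data structure this abstract works with: lower record values extracted from a Kumaraswamy sample. It assumes inverse-CDF sampling from F(x) = 1 - (1 - x^a)^b on (0, 1); the parameter values and sample size are illustrative, and no estimation is attempted here.

```python
import numpy as np

# Kumaraswamy(a, b) draws via the inverse CDF: solving
# u = 1 - (1 - x**a)**b for x gives x = (1 - (1 - u)**(1/b))**(1/a).
rng = np.random.default_rng(2)
a, b = 2.0, 3.0
u = rng.uniform(size=200)
x = (1.0 - (1.0 - u) ** (1.0 / b)) ** (1.0 / a)

# Lower record values: each new observation smaller than all before it.
records = [x[0]]
for xi in x[1:]:
    if xi < records[-1]:
        records.append(xi)
```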

13.
In many relevant situations, such as in medical research, sample sizes may not be known in advance. The aim of this paper is to extend one-way and higher-way analysis of variance to such situations and to show how to compute correct critical values. The interest of this approach lies in avoiding the false rejections obtained when using the classical fixed-sample-size F-tests. Sample sizes are assumed to be random, and the approach is applied to a cancer database.

14.
Meta-analysis is formulated as a special case of a multilevel (hierarchical data) model in which the highest level is that of the study and the lowest level that of an observation on an individual respondent. Studies can be combined within a single model where the responses occur at different levels of the data hierarchy and efficient estimates are obtained. An example is given from studies of class sizes and achievement in schools, where study data are available at the aggregate level in terms of overall mean values for classes of different sizes, and also at the student level.
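As a minimal sketch of combining studies, the classical inverse-variance (fixed-effect) pooling below is the simplest special case of the multilevel formulation described above; the study-level estimates and variances are hypothetical.

```python
import numpy as np

# Inverse-variance (fixed-effect) pooling: each study i contributes an
# estimate y_i with sampling variance v_i, and the pooled estimate
# weights each study by 1 / v_i.
y = np.array([0.30, 0.10, 0.25, 0.18])   # study-level effect estimates
v = np.array([0.01, 0.04, 0.02, 0.05])   # their sampling variances

w = 1.0 / v
pooled = np.sum(w * y) / np.sum(w)       # precision-weighted mean
pooled_var = 1.0 / np.sum(w)             # variance of the pooled estimate
```

The pooled variance is smaller than any single study's variance, which is the usual motivation for combining studies at all; the multilevel model extends this by letting study data enter at different levels of the hierarchy.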

15.
Statistical models are sometimes incorporated into computer software for making predictions about future observations. When the computer model consists of a single statistical model this corresponds to estimation of a function of the model parameters. This paper is concerned with the case that the computer model implements multiple, individually estimated statistical sub-models. This case frequently arises, for example, in models for medical decision making that derive parameter information from multiple clinical studies. We develop a method for calculating the posterior mean of a function of the parameter vectors of multiple statistical models that is easy to implement in computer software, has high asymptotic accuracy, and has a computational cost linear in the total number of model parameters. The formula is then used to derive a general result about posterior estimation across multiple models. The utility of the results is illustrated by application to clinical software that estimates the risk of fatal coronary disease in people with diabetes.

16.
We propose an easy to derive and simple to compute approximate least squares or maximum likelihood estimator for nonlinear errors-in-variables models that does not require the knowledge of the conditional density of the latent variables given the observables. Specific examples and Monte Carlo studies demonstrate that the bias of this approximate estimator is small even when the ratio of the variance of measurement errors to the variance of measured covariates is large. Cheng Hsiao and Qing Wang's work was supported in part by National Science Foundation grants SeS91-22481 and SBR94-09540. Liqun Wang gratefully acknowledges the financial support from the Swiss National Science Foundation. We wish to thank Professor H. Schneeweiss and a referee for helpful comments and suggestions.

17.
Abstract

In this paper, we discuss how to model the mean and covariance structures in linear mixed models (LMMs) simultaneously. We propose a data-driven method to model the covariance structures of the random effects and random errors in LMMs. Parameter estimation for the mean and covariances is carried out using the EM algorithm, and standard errors of the parameter estimates are calculated through Louis' (1982) information principle. Kenward's (1987) cattle data sets are analysed for illustration, and comparison to existing work is made through simulation studies. Our numerical analysis confirms the superiority of the proposed method to existing approaches in terms of the Akaike information criterion.

18.
19.
We propose a state-space approach for GARCH models with time-varying parameters that is able to deal with the non-stationarity usually observed in a wide variety of time series. The parameters of the non-stationary model are allowed to vary smoothly over time through non-negative deterministic functions. We implement the estimation of the time-varying parameters in the time domain through Kalman filter recursive equations, finding a state-space representation of a class of time-varying GARCH models. We provide prediction intervals for time-varying GARCH models and, additionally, propose a simple methodology for handling missing values. Finally, the proposed methodology is applied to the Chilean Stock Market index (IPSA) and to the American Standard & Poor's 500 index (S&P 500).
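For reference, the constant-parameter GARCH(1,1) recursion that such time-varying models generalize can be sketched as follows. The paper's model would let the parameters (ω, α, β) vary smoothly with t; the fixed values below are illustrative.

```python
import numpy as np

# Standard GARCH(1,1): h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1},
# with returns r_t = sqrt(h_t) * z_t, z_t standard normal. A time-varying
# GARCH model would replace (omega, alpha, beta) with smooth functions of t.
rng = np.random.default_rng(1)
omega, alpha, beta = 0.05, 0.1, 0.85   # alpha + beta < 1: stationarity
T = 500
h = np.empty(T)                        # conditional variances
r = np.empty(T)                        # simulated returns
h[0] = omega / (1 - alpha - beta)      # unconditional variance
r[0] = np.sqrt(h[0]) * rng.normal()
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    r[t] = np.sqrt(h[t]) * rng.normal()
```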

20.
In this paper we consider the long-run availability of a parallel system having several independent renewable components with exponentially distributed failure and repair times. We are interested in testing the availability of the system, or constructing a lower confidence bound for it, using component test data. For this problem no exact test or confidence bound exists; only approximate methods are available in the literature. Using the generalized p-value approach, an exact test and a generalized confidence interval are given. An example illustrates the proposed procedures, and a simulation study demonstrates their advantages over the available approximate procedures. Based on type I and type II error rates, the simulation study shows that the generalized procedures outperform the other available methods.
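A minimal sketch of the availability quantities involved: a component with exponential failure rate λ and repair rate μ has long-run availability μ/(λ + μ), and an independent parallel system is up whenever at least one component is up. The rates below are hypothetical; the paper's contribution, the generalized p-value test, is not reproduced here.

```python
# Long-run availability of one renewable component with exponential
# failure rate lam and repair rate mu: mu / (lam + mu).
def component_availability(lam, mu):
    return mu / (lam + mu)

# An independent parallel system fails only when every component is
# down, so system availability is 1 - prod(1 - A_i).
def parallel_availability(rates):
    unavail = 1.0
    for lam, mu in rates:
        unavail *= 1.0 - component_availability(lam, mu)
    return 1.0 - unavail

A_sys = parallel_availability([(0.02, 0.5), (0.05, 0.4)])
```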
