Similar Literature
 Found 20 similar documents (search time: 119 ms)
1.
The approach of Bayesian mixed effects modeling is an appropriate method for estimating both population-specific and subject-specific times to steady state. In addition to pure estimation, the approach allows one to determine the time until a certain fraction of individuals in a population has reached steady state with a pre-specified certainty. In this paper a mixed effects model for the parameters of a nonlinear pharmacokinetic model is used within a Bayesian framework. Model fitting by means of Markov Chain Monte Carlo methods as implemented in the Gibbs sampler, as well as the extraction of estimates and probability statements of interest, are described. Finally, the proposed approach is illustrated by application to trough data from a multiple dose clinical trial.
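A minimal sketch of the kind of probability statement described above (not the authors' model or data): assuming hypothetical posterior draws of subject-specific elimination rate constants, the time to reach 90% of steady state under repeated dosing is roughly ln(10)/k, and the certainty that a prespecified fraction of the population has reached steady state by a candidate time can be read directly off the draws.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws (e.g. from a Gibbs sampler) of subject-specific
# elimination rate constants k_i (1/h) for a population of subjects.
n_draws, n_subjects = 2000, 40
pop_log_k = rng.normal(np.log(0.10), 0.05, size=(n_draws, 1))        # population level (log scale)
subj_log_k = pop_log_k + rng.normal(0.0, 0.3, size=(n_draws, n_subjects))
k = np.exp(subj_log_k)                                               # draws x subjects

# Time to reach 90% of steady state under repeated dosing:
# 1 - exp(-k*t) = 0.9  =>  t90 = ln(10) / k  (about 3.32 half-lives).
t90 = np.log(10.0) / k

# Posterior probability that at least 80% of subjects have reached
# steady state by a candidate time t (here t = 30 h).
t = 30.0
frac_at_ss = (t90 <= t).mean(axis=1)          # fraction of subjects per posterior draw
prob = (frac_at_ss >= 0.80).mean()            # certainty over the posterior draws
print(f"P(at least 80% of subjects at steady state by {t} h) = {prob:.2f}")
```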

2.
Communications in Statistics - Theory and Methods, 2012, 41(16-17): 3278-3300
Under complex survey sampling, in particular when selection probabilities depend on the response variable (informative sampling), the sample and population distributions differ, possibly resulting in selection bias. This article addresses the problem by fitting two statistical models for one-way analysis of variance under a complex survey design (e.g., two-stage sampling, stratification, and unequal probabilities of selection): the variance components model (a two-stage model) and the fixed effects model (a single-stage model). Classical theory underlying the use of the two-stage model involves simple random sampling at each of the two stages; in such cases the model holding for the sample, after sample selection, is the same as the model for the population before sample selection. When the selection probabilities are related to the values of the response variable, standard estimates of the population model parameters may be severely biased, possibly leading to false inference. The idea behind the approach is to extract the model holding for the sample data as a function of the population model and of the first-order inclusion probabilities, and then fit the sample model using analysis of variance, maximum likelihood, and pseudo maximum likelihood methods of estimation. The main feature of the proposed techniques relates to their behavior in terms of the informativeness parameter. We also show that using the population model while ignoring the informative sampling design yields biased model fitting.
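The following toy sketch (hypothetical data and inclusion probabilities, not from the article) illustrates the pseudo maximum likelihood idea: under size-biased selection the unweighted group means are distorted, while weighting each observation by the inverse of its first-order inclusion probability recovers the population group means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite population: one-way layout with 3 groups (fixed effects model).
groups = np.repeat([0, 1, 2], 2000)
y_pop = rng.normal(loc=np.array([10.0, 12.0, 14.0])[groups], scale=2.0)

# Informative sampling: inclusion probability increases with the response,
# so larger y values are over-represented in the sample.
pi = np.clip(0.02 + 0.01 * (y_pop - y_pop.min()), 0.0, 1.0)
in_sample = rng.random(y_pop.size) < pi
y_s, g_s, pi_s = y_pop[in_sample], groups[in_sample], pi[in_sample]

# Naive (unweighted) group means: biased upward under this design.
naive = np.array([y_s[g_s == g].mean() for g in range(3)])

# Pseudo maximum likelihood: weight each observation by 1/pi (Horvitz-Thompson
# style), which estimates the census (population) likelihood equations.
w = 1.0 / pi_s
pml = np.array([np.average(y_s[g_s == g], weights=w[g_s == g]) for g in range(3)])

true = np.array([y_pop[groups == g].mean() for g in range(3)])
print("true group means :", np.round(true, 2))
print("naive (biased)   :", np.round(naive, 2))
print("pseudo-ML        :", np.round(pml, 2))
```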

3.
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models.

4.
We propose a method for estimating parameters in generalized linear models when the outcome variable is missing for some subjects and the missing data mechanism is non-ignorable. We assume throughout that the covariates are fully observed. One possible method for estimating the parameters is maximum likelihood with a non-ignorable missing data model. However, caution must be used when fitting non-ignorable missing data models because certain parameters may be inestimable for some models. Instead of fitting a non-ignorable model, we propose the use of auxiliary information in a likelihood approach to reduce the bias, without having to specify a non-ignorable model. The method is applied to a mental health study.

5.
An exploratory model analysis device we call CDF knotting is introduced. It is a technique we have found useful for exploring relationships between points in the parameter space of a model and global properties of associated distribution functions. It can be used to alert the model builder to a condition we call lack of distinguishability, which is to nonlinear models what multicollinearity is to linear models. While there are simple remedial actions to deal with multicollinearity in linear models, techniques such as deleting redundant variables in those models do not have obvious parallels for nonlinear models. In some of these nonlinear situations, however, CDF knotting may lead to alternative models with fewer parameters whose distribution functions are very similar to those of the original overparameterized model. We also show how CDF knotting can be exploited as a mathematical tool for deriving limiting distributions and illustrate the technique for the 3-parameter Weibull family, obtaining limiting forms and moment ratios which correct and extend previously published results. Finally, geometric insights obtained by CDF knotting are verified relative to data fitting and estimation.
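As a rough illustration of lack of distinguishability (arbitrary, moment-matched parameter values, not the authors' examples), the sketch below overlays the CDFs of two quite different 3-parameter Weibull triples and reports the largest vertical gap between them; a small gap signals that data could hardly separate the two parameter points.

```python
import numpy as np
from scipy.stats import weibull_min

# Two different parameter triples (shape c, location loc, scale) chosen so
# that their first two moments roughly agree.
params_a = dict(c=2.0, loc=0.0, scale=1.0)
params_b = dict(c=2.5, loc=-0.197, scale=1.221)

x = np.linspace(-0.2, 4.0, 600)
cdf_a = weibull_min.cdf(x, **params_a)
cdf_b = weibull_min.cdf(x, **params_b)

# A crude distinguishability measure: the largest vertical gap between CDFs.
max_gap = np.max(np.abs(cdf_a - cdf_b))
print(f"max |F_a(x) - F_b(x)| over grid = {max_gap:.3f}")
# A small maximum gap means data could hardly tell the two parameter points
# apart -- the nonlinear analogue of multicollinearity described above.
```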

6.
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently, with the corresponding variance estimation. Simulations showed that the proposed method produces an unbiased estimator of the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes.
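A small numeric illustration of the bias discussed above, using a hypothetical logistic model for one treatment group: the response evaluated at the mean covariate differs from the mean of the subject-level predicted responses, which is the population quantity a consistent group-mean estimator should target.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)

# Hypothetical logistic model for one treatment group: logit P(Y=1) = b0 + b1*x.
b0, b1 = -2.0, 1.5
x = rng.normal(0.0, 1.5, size=5000)          # baseline covariate in the group

# "Model-based mean" reported by many packages: response at the mean covariate.
mean_at_mean_x = expit(b0 + b1 * x.mean())

# Consistent group mean: average the predicted response over the covariate
# distribution of the studied population.
mean_response = expit(b0 + b1 * x).mean()

print(f"response at mean covariate : {mean_at_mean_x:.3f}")
print(f"mean predicted response    : {mean_response:.3f}")
# With a nonlinear link these differ (Jensen's inequality); with the identity
# link of a linear model they coincide.
```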

7.
In this article, dichotomous variables are used to compare linear and nonlinear Bayesian structural equation models. The Gibbs sampling method is applied for estimation and model comparison. Statistical inferences that involve estimation of parameters and their standard deviations, as well as residual analysis for testing the selected model, are discussed. A hidden continuous normal distribution (censored normal distribution) is used to handle the dichotomous variables. The proposed procedure is illustrated with simulated data generated in R. Analyses are carried out using the R2WinBUGS package in R.

8.
An exact maximum likelihood method is developed for the estimation of parameters in a non-Gaussian nonlinear density function that depends on a latent Gaussian dynamic process with long-memory properties. Our method relies on the method of importance sampling and on a linear Gaussian approximating model from which the latent process can be simulated. Given the presence of a latent long-memory process, we require a modification of the importance sampling technique. In particular, the long-memory process needs to be approximated by a finite dynamic linear process. Two possible approximations are discussed and are compared with each other. We show that an autoregression obtained from minimizing mean squared prediction errors leads to an effective and feasible method. In our empirical study, we analyze ten daily log-return series from the S&P 500 stock index by univariate and multivariate long-memory stochastic volatility models. We compare the in-sample and out-of-sample performance of a number of models within the class of long-memory stochastic volatility models.
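The sketch below illustrates one standard way to build such a finite autoregressive approximation (not necessarily the authors' exact construction): using the closed-form autocovariances of an ARFIMA(0, d, 0) process, the Yule-Walker equations give the AR(p) coefficients that minimize the one-step mean squared prediction error.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.special import gammaln

def arfima_autocov(d, n_lags, sigma2=1.0):
    """Autocovariances gamma(0..n_lags) of a stationary ARFIMA(0, d, 0), |d| < 0.5."""
    g0 = sigma2 * np.exp(gammaln(1 - 2 * d) - 2 * gammaln(1 - d))
    gam = np.empty(n_lags + 1)
    gam[0] = g0
    for k in range(1, n_lags + 1):
        gam[k] = gam[k - 1] * (k - 1 + d) / (k - d)   # standard recursion
    return gam

d, p = 0.35, 20                       # long-memory parameter, AR approximation order
gam = arfima_autocov(d, p)

# Yule-Walker equations: the AR(p) whose coefficients minimize the one-step
# mean squared prediction error for a process with these autocovariances.
phi = solve_toeplitz(gam[:p], gam[1:p + 1])

one_step_mse = gam[0] - phi @ gam[1:p + 1]
print("first 5 AR coefficients:", np.round(phi[:5], 4))
print(f"one-step prediction MSE of the AR({p}) approximation: {one_step_mse:.4f}")
```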

9.
The author investigates least squares as a method for fitting small-circle models to a sample of unit vectors in R^3. He highlights a local linear model underlying the estimation of the parameters of a circle. This model is used to construct an estimation algorithm and regression-type inference procedures for the parameters of a circle. It makes it possible to compare the fit of a small circle with that of a spherical ellipse. The limitations of the least-squares approach are emphasized: when the errors are bounded away from 0, the least-squares estimators are not consistent as the sample size goes to infinity. Two examples, concerned with the migration of elephant seals and with the classification of geological folds, are analyzed using the linear model techniques proposed in this work.
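One simple least-squares formulation for a small circle (a sketch under convenient assumptions, not necessarily the author's algorithm): minimizing the sum of squared deviations of the projections x_i . c from a common value reduces to taking the axis c as the eigenvector of the sample covariance matrix with the smallest eigenvalue, with the colatitude estimated from the mean projection.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate unit vectors scattered around a small circle: axis c0, colatitude alpha.
c0 = np.array([0.3, 0.4, np.sqrt(1 - 0.3**2 - 0.4**2)])
alpha = np.deg2rad(35.0)
t = rng.uniform(0, 2 * np.pi, 200)
u = np.cross(c0, [0.0, 0.0, 1.0])           # orthonormal basis perpendicular to c0
u /= np.linalg.norm(u)
v = np.cross(c0, u)
x = (np.cos(alpha) * c0[None, :]
     + np.sin(alpha) * (np.cos(t)[:, None] * u + np.sin(t)[:, None] * v))
x += rng.normal(0, 0.02, x.shape)                 # measurement noise
x /= np.linalg.norm(x, axis=1, keepdims=True)     # re-normalize to unit length

# Least-squares small-circle fit: minimize sum_i (x_i . c - nu)^2 over unit c
# and scalar nu.  For fixed c the optimal nu is the mean projection, so the
# problem reduces to finding the direction of smallest projection variance,
# i.e. the eigenvector of the sample covariance matrix with smallest eigenvalue.
cov = np.cov(x.T)
eigval, eigvec = np.linalg.eigh(cov)
c_hat = eigvec[:, 0]
if c_hat @ c0 < 0:                                # fix the sign for comparison
    c_hat = -c_hat
alpha_hat = np.arccos(np.clip(x @ c_hat, -1, 1).mean())

print("axis error (deg):", np.rad2deg(np.arccos(np.clip(c_hat @ c0, -1, 1))))
print("colatitude estimate (deg):", np.rad2deg(alpha_hat), " true:", 35.0)
```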

10.
This paper brings together two topics in the estimation of time series forecasting models: the use of the multistep-ahead error sum of squares as a criterion to be minimized and frequency domain methods for carrying out this minimization. The methods are developed for the wide class of time series models having a spectrum which is linear in unknown coefficients. This includes the IMA(1, 1) model, for which the common exponentially weighted moving average predictor is optimal, besides more general structural models for series exhibiting trends and seasonality. The method is extended to include the Box–Jenkins 'airline' model. The value of the multistep criterion is that it provides protection against using an incorrectly specified model. The value of frequency domain estimation is that the iteratively reweighted least squares scheme for fitting generalized linear models is readily extended to construct the parameter estimates and their standard errors. It also yields insight into the loss of efficiency when the model is correct and the robustness of the criterion against an incorrect model. A simple example is used to illustrate the method, and a real example demonstrates the extension to seasonal models. The discussion considers a diagnostic test statistic for indicating an incorrect model.
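A time-domain sketch of the multistep criterion for the simplest case (the paper itself carries out the minimization in the frequency domain): for an IMA(1, 1) series the EWMA predictor is optimal, and the smoothing weight can be chosen by minimizing the h-step-ahead error sum of squares over a grid.

```python
import numpy as np

def multistep_sse(y, lam, h):
    """Sum of squared h-step-ahead EWMA forecast errors for smoothing weight lam."""
    level = y[0]
    sse = 0.0
    for t in range(len(y) - h):
        if t > 0:
            level = lam * y[t] + (1 - lam) * level   # update with data up to time t
        sse += (y[t + h] - level) ** 2               # the EWMA h-step forecast is the level
    return sse

rng = np.random.default_rng(4)

# Simulate an IMA(1,1) series, for which the EWMA predictor is optimal.
n, theta = 400, 0.6
eps = rng.normal(size=n)
y = np.cumsum(eps[1:] - theta * eps[:-1])

# Choose the smoothing weight by minimizing the multistep (here 4-step) error
# sum of squares on a grid; the criterion protects against a misspecified model.
grid = np.linspace(0.05, 0.95, 91)
h = 4
best = min(grid, key=lambda lam: multistep_sse(y, lam, h))
print(f"smoothing weight minimizing {h}-step SSE: {best:.2f}")
print(f"theoretical optimum for this IMA(1,1): lambda = 1 - theta = {1 - theta:.2f}")
```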

11.
This paper considers the effects of informative two-stage cluster sampling on estimation and prediction. The aims of this article are twofold: first to estimate the parameters of the superpopulation model for two-stage cluster sampling from a finite population, when the sampling design for both stages is informative, using maximum likelihood estimation methods based on the sample-likelihood function; secondly to predict the finite population total and to predict the cluster-specific effects and the cluster totals for clusters in the sample and for clusters not in the sample. To achieve this we derive the sample and sample-complement distributions and the moments of the first and second stage measurements. Also we derive the conditional sample and conditional sample-complement distributions and the moments of the cluster-specific effects given the cluster measurements. It should be noted that classical design-based inference that consists of weighting the sample observations by the inverse of sample selection probabilities cannot be applied for the prediction of the cluster-specific effects for clusters not in the sample. Also we give an alternative justification of the Royall [1976. The linear least squares prediction approach to two-stage sampling. Journal of the American Statistical Association 71, 657–664] predictor of the finite population total under a two-stage cluster population. Furthermore, small-area models are studied under informative sampling.

12.
Several approaches have been suggested for fitting linear regression models to censored data. These include Cox's proportional hazard models based on quasi-likelihoods. Methods of fitting based on least squares and maximum likelihoods have also been proposed. The methods proposed so far all require special purpose optimization routines. We describe an approach here which requires only a modified standard least squares routine.

We present methods for fitting a linear regression model to censored data by least squares and the method of maximum likelihood. In the least squares method, the censored values are replaced by their expectations, and the residual sum of squares is minimized. Several variants are suggested in the ways in which the expectation is calculated. A parametric (assuming a normal error model) and two non-parametric approaches are described. We also present a method for solving the maximum likelihood equations in the estimation of the regression parameters in the censored regression situation. It is shown that the solutions can be obtained by a recursive algorithm which needs only a least squares routine for optimization. The suggested procedures gain considerably in computational efficiency. The Stanford Heart Transplant data are used to illustrate the various methods.
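A minimal sketch of the parametric (normal-error) least-squares variant described above, not the paper's exact routine: right-censored responses are replaced by their conditional expectations given exceedance of the censoring point, the ordinary least-squares fit is recomputed, and the two steps are iterated to convergence.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)

# Simulated data: y = b0 + b1*x + normal error, right-censored at c.
n, b0, b1, sigma = 300, 1.0, 2.0, 1.0
x = rng.uniform(0, 3, n)
y_true = b0 + b1 * x + rng.normal(0, sigma, n)
c = 5.0
censored = y_true > c
y_obs = np.where(censored, c, y_true)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y_obs, rcond=None)[0]      # start from the naive LS fit

for _ in range(50):
    mu = X @ beta
    resid = y_obs[~censored] - mu[~censored]
    s = resid.std(ddof=2)                            # crude scale estimate (sketch only)
    # Replace censored values by E[Y | Y > c] under the normal error model.
    z = (c - mu[censored]) / s
    y_work = y_obs.copy()
    y_work[censored] = mu[censored] + s * norm.pdf(z) / norm.sf(z)
    beta_new = np.linalg.lstsq(X, y_work, rcond=None)[0]
    if np.max(np.abs(beta_new - beta)) < 1e-8:
        beta = beta_new
        break
    beta = beta_new

print("estimated (b0, b1):", np.round(beta, 3), " true:", (b0, b1))
```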

13.
Semiparametric Analysis of Truncated Data
Randomly truncated data are frequently encountered in many studies where truncation arises as a result of the sampling design. In the literature, nonparametric and semiparametric methods have been proposed to estimate parameters in one-sample models. This paper considers a semiparametric model and develops an efficient method for the estimation of unknown parameters. The model assumes that K populations have a common probability distribution but the populations are observed subject to different truncation mechanisms. Semiparametric likelihood estimation is studied and the corresponding inferences are derived for both parametric and nonparametric components in the model. The method can also be applied to two-sample problems to test the difference of lifetime distributions. Simulation results and a real data analysis are presented to illustrate the methods.

14.
In the analysis of correlated ordered data, mixed-effect models are frequently used to control for subject heterogeneity effects. A common assumption in fitting these models is the normality of random effects. In many cases, this is unrealistic, making the estimation results unreliable. This paper considers several flexible models for random effects and investigates their properties in the model fitting. We adopt a proportional odds logistic regression model and incorporate the skewed versions of the normal, Student's t and slash distributions for the effects. Stochastic representations for the various flexible distributions are then proposed based on the mixing strategy approach. This reduces the computational burden of the MCMC technique. Furthermore, this paper addresses the identifiability restrictions and suggests a procedure to handle this issue. We analyze a real data set taken from an ophthalmic clinical trial. Model selection is performed by suitable Bayesian model selection criteria.
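The sketch below shows one common stochastic (mixing-type) representation for skew-normal and skew-t random effects; the parameterization is illustrative and not necessarily the one adopted in the paper. Drawing the effects this way is what makes data augmentation in an MCMC scheme convenient.

```python
import numpy as np

rng = np.random.default_rng(6)

def skew_normal_effects(n, xi=0.0, omega=1.0, alpha=3.0):
    """Draw skew-normal random effects via the convolution-type representation
    Z = delta*|U0| + sqrt(1 - delta^2)*U1, with U0, U1 iid N(0, 1)."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.normal(size=n))               # half-normal "mixing" component
    u1 = rng.normal(size=n)
    z = delta * u0 + np.sqrt(1.0 - delta**2) * u1
    return xi + omega * z

def skew_t_effects(n, xi=0.0, omega=1.0, alpha=3.0, nu=5.0):
    """Skew-t effects: a skew-normal draw scaled by a chi-square mixing weight."""
    w = rng.chisquare(nu, size=n) / nu            # mixing weights
    return xi + skew_normal_effects(n, 0.0, omega, alpha) / np.sqrt(w)

b_sn = skew_normal_effects(10_000)
b_st = skew_t_effects(10_000)
print("skew-normal effects: mean %.3f, skewness sign %s"
      % (b_sn.mean(), "+" if ((b_sn - b_sn.mean())**3).mean() > 0 else "-"))
print("skew-t effects have heavier tails: sd %.3f vs %.3f" % (b_st.std(), b_sn.std()))
```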

15.
Optimal sampling strategies which minimise the expected mean square error for a linear design, as well as model-design unbiased estimators for a finite population total for two-stage and stratified sampling, are obtained under different superpopulation models.

16.
Outcome-dependent sampling increases the efficiency of studies of rare outcomes, examples being case-control studies in epidemiology and choice-based sampling in econometrics. Two-phase or double sampling is a standard technique for drawing efficient stratified samples. We develop maximum likelihood estimation of logistic regression coefficients for a hybrid two-phase, outcome-dependent sampling design. An algorithm is given for determining the estimates by repeated fitting of ordinary logistic regression models. Simulation results demonstrate the efficiency loss associated with alternative pseudolikelihood and weighted likelihood methods for certain data configurations. These results provide an efficient solution to the measurement error problem with validation sampling based on a discrete surrogate.

17.
This paper presents an EM algorithm for maximum likelihood estimation in generalized linear models with overdispersion. The algorithm is initially derived as a form of Gaussian quadrature assuming a normal mixing distribution, but with only slight variation it can be used for a completely unknown mixing distribution, giving a straightforward method for the fully non-parametric ML estimation of this distribution. This is of value because the ML estimates of the GLM parameters may be sensitive to the specification of a parametric form for the mixing distribution. A listing of a GLIM4 algorithm for fitting the overdispersed binomial logit model is given in an appendix. A simple method is given for obtaining correct standard errors for parameter estimates when using the EM algorithm. Several examples are discussed.
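As a hedged sketch of the normal mixing case (direct maximization by Gaussian quadrature rather than the paper's EM formulation, and on simulated data): the marginal likelihood of a random-intercept, overdispersed binomial logit model is evaluated with Gauss-Hermite quadrature and maximized numerically.

```python
import numpy as np
from scipy.optimize import minimize
from numpy.polynomial.hermite import hermgauss

rng = np.random.default_rng(7)

# Simulated overdispersed binomial data: random-intercept logistic model,
# logit p_i = beta0 + beta1*x_i + b_i, with b_i ~ N(0, sigma^2).
n, m = 200, 10                                # clusters, binomial trials per cluster
x = rng.normal(size=n)
b = rng.normal(0, 1.0, size=n)
p = 1 / (1 + np.exp(-(-0.5 + 1.0 * x + b)))
y = rng.binomial(m, p)

nodes, weights = hermgauss(20)                # 20-point Gauss-Hermite rule

def neg_marginal_loglik(theta):
    beta0, beta1, log_sigma = theta
    sigma = np.exp(log_sigma)
    # Integrate the binomial likelihood over b ~ N(0, sigma^2) by quadrature:
    # evaluate at b_k = sqrt(2)*sigma*node_k, combine with weight_k / sqrt(pi).
    eta = beta0 + beta1 * x[:, None] + np.sqrt(2) * sigma * nodes[None, :]
    pk = 1 / (1 + np.exp(-eta))
    lik_k = pk**y[:, None] * (1 - pk)**(m - y[:, None])   # binomial kernel,
    # the binomial coefficient is omitted (it does not depend on the parameters)
    marg = (lik_k * weights[None, :]).sum(axis=1) / np.sqrt(np.pi)
    return -np.sum(np.log(marg))

fit = minimize(neg_marginal_loglik, x0=np.zeros(3), method="Nelder-Mead")
beta0_hat, beta1_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
print(f"beta0={beta0_hat:.2f}, beta1={beta1_hat:.2f}, sigma={sigma_hat:.2f}")
```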

18.
Maximum likelihood is a widely used estimation method in statistics. This method is model dependent and as such is criticized as being non-robust. In this article, we consider using the weighted likelihood method to make robust inferences for linear mixed models where weights are determined at both the subject level and the observation level. This approach is appropriate for problems where maximum likelihood is the basic fitting technique, but a subset of data points is discrepant with the model. It allows us to reduce the impact of outliers without complicating the basic linear mixed model with normally distributed random effects and errors. The weighted likelihood estimators are shown to be robust and asymptotically normal. Our simulation study demonstrates that the weighted estimates are much better than the unweighted ones when a subset of data points is far away from the rest. Its application to the analysis of deglutition apnea duration in normal swallows shows that the differences between the weighted and unweighted estimates are due to a large number of outliers in the data set.

19.
The diagnostic odds ratio is defined as the ratio of the odds of a positive diagnostic test result in the diseased population relative to that in the non-diseased population. It is a function of sensitivity and specificity and can be seen as an indicator of diagnostic accuracy for the evaluation of a biomarker/test. The naïve estimator of the diagnostic odds ratio fails when either sensitivity or specificity is close to one, which makes the denominator of the diagnostic odds ratio equal to zero. We propose several methods to adjust for this situation. Agresti and Coull's adjustment is a common and straightforward remedy for extreme binomial proportions. Alternatively, estimation methods based on a more advanced sampling design can be applied, which systematically selects samples from the underlying population based on judgment ranks. Under such a design, the odds can be estimated by a sum of indicator functions, thus avoiding division by zero and providing valid estimation. The asymptotic mean and variance of the proposed estimators are derived. All methods are readily applied to confidence interval estimation and hypothesis testing for the diagnostic odds ratio. A simulation study is conducted to compare the efficiency of the proposed methods. Finally, the proposed methods are illustrated using a real dataset.
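The following sketch contrasts the naive diagnostic odds ratio with two simple adjustments when a cell count is zero: the familiar 0.5 continuity correction and an Agresti-Coull-style adjustment of sensitivity and specificity. The ranked-set-sampling estimators proposed in the article are not reproduced here, and the counts are purely illustrative.

```python
import numpy as np

def dor_naive(tp, fn, fp, tn):
    """Naive diagnostic odds ratio; undefined when fn or fp is zero."""
    return (tp * tn) / (fp * fn) if fp > 0 and fn > 0 else np.inf

def dor_haldane(tp, fn, fp, tn):
    """Add 0.5 to every cell (Haldane-Anscombe continuity correction)."""
    return ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))

def dor_agresti_coull(tp, fn, fp, tn, z=1.96):
    """Adjust sensitivity and specificity as extreme binomial proportions
    (Agresti-Coull style: add z^2/2 successes and z^2/2 failures) before
    forming the odds ratio."""
    sens = (tp + z**2 / 2) / (tp + fn + z**2)
    spec = (tn + z**2 / 2) / (tn + fp + z**2)
    return (sens / (1 - sens)) * (spec / (1 - spec))

# Example with perfect specificity in the observed sample: the naive DOR blows up.
tp, fn, fp, tn = 45, 5, 0, 50
print("naive        :", dor_naive(tp, fn, fp, tn))
print("Haldane +0.5 :", round(dor_haldane(tp, fn, fp, tn), 1))
print("Agresti-Coull:", round(dor_agresti_coull(tp, fn, fp, tn), 1))
```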

20.
Model summaries based on the ratio of fitted and null likelihoods have been proposed for generalised linear models, reducing to the familiar R2 coefficient of determination in the Gaussian model with identity link. In this note I show how to define the Cox–Snell and Nagelkerke summaries under arbitrary probability sampling designs, giving a design-consistent estimator of the population model summary. It is also shown that for logistic regression models under case–control sampling the usual Cox–Snell and Nagelkerke R2 are not design-consistent, but are systematically larger than would be obtained with a cross-sectional or cohort sample from the same population, even in settings where the weighted and unweighted logistic regression estimators are similar or identical. Implementation of the new estimators is straightforward and code is provided in R.
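A sketch of one plausible design-weighted construction in the spirit of this note (not necessarily the exact estimator proposed, and written in Python rather than the R code the paper provides): the full and null pseudo log-likelihoods are computed with sampling weights and scaled by the estimated population size before forming the Cox-Snell and Nagelkerke summaries.

```python
import numpy as np

def weighted_logistic_loglik(X, y, w, n_iter=100):
    """Fit a logistic regression by weighted IRLS (Newton) and return the
    weighted (pseudo) log-likelihood at the maximum."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (w * (y - p))
        hess = X.T @ (X * (w * p * (1 - p))[:, None])
        step = np.linalg.solve(hess, grad)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:
            break
    p = 1 / (1 + np.exp(-X @ beta))
    return np.sum(w * (y * np.log(p) + (1 - y) * np.log(1 - p)))

rng = np.random.default_rng(8)

# Hypothetical survey sample with sampling weights w (inverse inclusion probabilities).
n = 1000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.2 * x))))
w = rng.uniform(1.0, 5.0, size=n)              # illustrative design weights

X_full = np.column_stack([np.ones(n), x])
X_null = np.ones((n, 1))
ll_full = weighted_logistic_loglik(X_full, y, w)
ll_null = weighted_logistic_loglik(X_null, y, w)

N_hat = w.sum()                                # estimated population size
r2_cs = 1.0 - np.exp(-2.0 * (ll_full - ll_null) / N_hat)
r2_max = 1.0 - np.exp(2.0 * ll_null / N_hat)   # maximum attainable Cox-Snell value
r2_nagelkerke = r2_cs / r2_max
print(f"design-weighted Cox-Snell R2  = {r2_cs:.3f}")
print(f"design-weighted Nagelkerke R2 = {r2_nagelkerke:.3f}")
```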
