Similar Articles
20 similar articles found (search time: 15 ms)
1.
Abstract. We consider a bidimensional Ornstein–Uhlenbeck process to describe tissue microvascularization in anti-cancer therapy. Data are discrete, partial and noisy observations of this stochastic differential equation (SDE). Our aim is to estimate the SDE parameters. We exploit the main advantage of a one-dimensional observation to compute the exact likelihood via the Kalman filter recursion, which makes a straightforward numerical maximization of the likelihood possible. Furthermore, we establish the link between the observations and an ARMA process, and we deduce the asymptotic properties of the maximum likelihood estimator. We show that this ARMA property generalizes to a higher-dimensional underlying Ornstein–Uhlenbeck diffusion. We compare this estimator with the one obtained by the well-known expectation–maximization algorithm on simulated data. Our estimation methods can be directly applied to other biological contexts such as drug pharmacokinetics or hormone secretion.
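As a toy illustration of the Kalman-filter likelihood idea in this abstract, the sketch below estimates the mean-reversion rate of a one-dimensional Ornstein–Uhlenbeck process observed with Gaussian noise. This is a simplified one-dimensional analogue, not the paper's bidimensional model, and all parameter values are invented:

```python
# Sketch: exact Gaussian likelihood of a noisily observed 1-D
# Ornstein-Uhlenbeck process via the Kalman filter recursion.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
dt, n, theta, sigma, tau = 0.1, 500, 1.5, 1.0, 0.2

# Exact discretisation: X_{k+1} = a X_k + N(0, q), with a = exp(-theta*dt).
a = np.exp(-theta * dt)
q = sigma**2 * (1 - a**2) / (2 * theta)
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0, np.sqrt(q))
y = x + rng.normal(0, tau, n)  # discrete, noisy observations

def neg_loglik(th, y=y, dt=dt, sigma=sigma, tau=tau):
    a = np.exp(-th * dt)
    q = sigma**2 * (1 - a**2) / (2 * th)
    m, p, ll = 0.0, sigma**2 / (2 * th), 0.0   # stationary prior
    for yk in y:
        s = p + tau**2                          # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * s) + (yk - m) ** 2 / s)
        k_gain = p / s                          # update step
        m += k_gain * (yk - m)
        p *= 1 - k_gain
        m, p = a * m, a**2 * p + q              # predict step
    return -ll

th_hat = minimize_scalar(neg_loglik, bounds=(0.05, 10.0),
                         method="bounded").x
```

Because the filtered likelihood is exact for this Gaussian model, a one-dimensional numerical search over the drift parameter is all that is needed.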

2.
We consider the problem of parameter estimation from observation of the trajectory of a diffusion process. We suppose that the drift coefficient has a singularity of cusp type and that the unknown parameter corresponds to the position of the cusp point. The asymptotic properties of the maximum likelihood estimator and of Bayesian estimators are described in the small-noise asymptotics, that is, as the diffusion coefficient tends to zero. The consistency, limit distributions, and convergence of moments of these estimators are established.

3.
We consider fitting the so-called Emax model to continuous response data from clinical trials designed to investigate the dose–response relationship of an experimental compound. When there is insufficient information in the data to estimate all of the parameters because the high-dose asymptote is ill defined, maximum likelihood estimation fails to converge. We explore the use of either bootstrap resampling or the profile likelihood to make inferences about effects and the doses required to give a particular effect, using limits on the parameter values to obtain the maximized likelihood when the high-dose asymptote is ill defined. The results show these approaches to be comparable with or better than others that have been used when maximum likelihood estimation fails to converge, and that the profile likelihood method outperforms the bootstrap resampling method considered. Copyright © 2014 John Wiley & Sons, Ltd.
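The Emax model referred to above is commonly written E(d) = E0 + Emax·d/(ED50 + d). A minimal least-squares fit (equivalent to Gaussian maximum likelihood), with a bound on ED50 in the spirit of the parameter limits the abstract mentions, might look like the following sketch on simulated data with invented values:

```python
# Sketch: fitting the three-parameter Emax model by least squares.
# The bound on ED50 mimics the use of parameter limits when the
# high-dose asymptote is ill defined.
import numpy as np
from scipy.optimize import curve_fit

def emax(d, e0, emax_, ed50):
    return e0 + emax_ * d / (ed50 + d)

rng = np.random.default_rng(1)
dose = np.array([0, 5, 25, 50, 100, 150], float)   # invented design
resp = emax(dose, 2.0, 10.0, 30.0) + rng.normal(0, 0.4, dose.size)

popt, _ = curve_fit(emax, dose, resp, p0=[1.0, 5.0, 20.0],
                    bounds=([-np.inf, 0.0, 1e-3],
                            [np.inf, np.inf, 1e4]))
e0_hat, emax_hat, ed50_hat = popt
```

Profiling the likelihood over a grid of fixed ED50 values, refitting the remaining parameters at each grid point, would give the profile-likelihood intervals the abstract compares against bootstrap resampling.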

4.
Abstract. Stochastic differential equations have been shown to be useful in describing random continuous-time processes. Biomedical experiments often imply repeated measurements on a series of experimental units, and differences between units can be represented by incorporating random effects into the model. When both system noise and random effects are considered, stochastic differential mixed-effects models ensue. This class of models enables the simultaneous representation of randomness in the dynamics of the phenomena being considered and of variability between experimental units, thus providing a powerful modelling tool with immediate applications in biomedicine and pharmacokinetic/pharmacodynamic studies. In most cases the likelihood function is not available, and thus maximum likelihood estimation of the unknown parameters is not possible. Here we propose a computationally fast approximated maximum likelihood procedure for the estimation of the non-random parameters and the random effects. The method is evaluated on simulations from some famous diffusion processes and on real data sets.

5.
Abstract

This paper investigates the first-order random coefficient integer-valued autoregressive process with occasional level-shift random noise, based on dual empirical likelihood. The limiting distribution of the log empirical likelihood ratio statistic is derived. Asymptotic convergence and confidence region results for the empirical likelihood ratio are given. Hypothesis testing is considered, and the maximum empirical likelihood estimator of the parameters is obtained. Simulations show that maximum empirical likelihood estimation is more efficient than conditional least squares estimation.

6.
We conducted confirmatory factor analysis (CFA) of responses (N = 803) to a self-reported measure of optimism, using full-information estimation via adaptive quadrature (AQ), an alternative estimation method for ordinal data. We evaluated AQ results in terms of the number of iterations required to achieve convergence, model fit, parameter estimates, standard errors (SEs), and statistical significance, across four link functions (logit, probit, log-log, complementary log-log) using 3–10 and 20 quadrature points. We compared AQ results with those obtained using maximum likelihood, robust maximum likelihood, and robust diagonally weighted least-squares estimation. Compared to the other two link functions, logit and probit not only produced fit statistics, parameter estimates, SEs, and levels of significance that varied less across numbers of quadrature points, but also fitted the data better and provided larger completely standardised loadings than did maximum likelihood and diagonally weighted least squares. Our findings demonstrate the viability of using full-information AQ to estimate CFA models with real-world ordinal data.

7.
The paper considers non-parametric maximum likelihood estimation of the failure time distribution for interval-censored data subject to misclassification. Such data can arise from two types of observation scheme: either observations continue until the first positive test result, or tests continue regardless of the test results. In the former case, the misclassification probabilities must be known, whereas in the latter case, joint estimation of the event-time distribution and the misclassification probabilities is possible. The regions on which the maximum likelihood estimate can have support are derived. Algorithms for computing the maximum likelihood estimate are investigated, and it is shown that algorithms appropriate for computing non-parametric mixing distributions perform better than an iterative convex minorant algorithm in terms of time to absolute convergence. A profile likelihood approach is proposed for joint estimation. The methods are illustrated on a data set relating to the onset of cardiac allograft vasculopathy in post-heart-transplantation patients.

8.
The parameter estimation problem for a Markov jump process sampled at equidistant time points is considered here. Unlike the diffusion case where a closed form of the likelihood function is usually unavailable, here an explicit expansion of the likelihood function of the sampled chain is provided. Under suitable ergodicity conditions on the jump process, the consistency and the asymptotic normality of the likelihood estimator are established as the observation period tends to infinity. Simulation experiments are conducted to demonstrate the computational facility of the method.
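The likelihood of an equidistantly sampled jump process can be sketched for a two-state chain: the sampled process is a Markov chain with transition matrix P(Δ) = exp(QΔ), so the log-likelihood is a sum over the observed one-step transitions. This is a hedged illustration with invented rates, not the paper's explicit expansion:

```python
# Sketch: maximum likelihood for a two-state Markov jump process
# observed at equally spaced times t_k = k*dt. The sampled chain has
# transition matrix P(dt) = expm(Q*dt).
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
lam, mu, dt, n = 1.0, 2.0, 0.5, 2000   # invented rates 0->1 and 1->0

def trans(lam_, mu_=mu, dt=dt):
    Q = np.array([[-lam_, lam_], [mu_, -mu_]])   # generator matrix
    return expm(Q * dt)

# Simulate the sampled chain directly from P(dt).
P = trans(lam)
x = np.zeros(n, dtype=int)
for k in range(1, n):
    x[k] = rng.choice(2, p=P[x[k - 1]])

def neg_loglik(lam_):
    P = trans(lam_)
    # Sum of log transition probabilities over observed one-step moves.
    return -np.sum(np.log(P[x[:-1], x[1:]]))

lam_hat = minimize_scalar(neg_loglik, bounds=(0.05, 10.0),
                          method="bounded").x
```

As the abstract's asymptotics suggest, the estimate concentrates around the true rate as the observation period n·dt grows.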

9.
We examine the issue of asymptotic efficiency of estimation for response-adaptive designs of clinical trials, in which the collected data contain a dependency structure. We establish the asymptotic lower bound of exponential rates for consistent estimators. Under certain regularity conditions, we show that the maximum likelihood estimator achieves this lower bound for response-adaptive trials with dichotomous responses. Furthermore, the maximum likelihood estimator of the treatment effect is shown to be asymptotically efficient in the Bahadur sense for response-adaptive clinical trials.

10.
In this article, we study moderate deviation for parameter estimation in the Rayleigh diffusion process and obtain the moderate deviation principle of the maximum likelihood estimator with explicit rate function.

11.
ABSTRACT. In this paper we consider logspline density estimation for random variables which are contaminated with random noise. In logspline density estimation for data without noise, the logarithm of an unknown density function is estimated by a polynomial spline, the unknown parameters of which are given by maximum likelihood. When noise is present, B-splines and the Fourier inversion formula are used to construct the logspline density estimator of the unknown density function. Rates of convergence are established when the log-density function is assumed to be in a Besov space. It is shown that convergence rates depend on the smoothness of the density function and the decay rate of the characteristic function of the noise. Simulated data are used to show the finite-sample performance of inference based on the logspline density estimation.

12.
In this paper we address the problem of estimating a vector of regression parameters in the Weibull censored regression model. Our main objective is to provide natural adaptive estimators that significantly improve upon the classical procedures in the situation where some of the predictors may or may not be associated with the response. In the context of two competing Weibull censored regression models (full model and candidate submodel), we consider an adaptive shrinkage estimation strategy that shrinks the full-model maximum likelihood estimate in the direction of the submodel maximum likelihood estimate. We develop the properties of these estimators using the notion of asymptotic distributional risk. The shrinkage estimators are shown to have higher efficiency than the classical estimators for a wide class of models. Further, we consider a LASSO-type estimation strategy and compare its relative performance with the shrinkage estimators. Monte Carlo simulations reveal that when the true model is close to the candidate submodel, the shrinkage strategy performs better than the LASSO strategy when, and only when, there are many inactive predictors in the model. The shrinkage and LASSO strategies are applied to a real data set from the Veterans' Administration (VA) lung cancer study to illustrate the usefulness of the procedures in practice.

13.
The paper presents an overview of maximum likelihood estimation using simulated likelihood, including the use of antithetic variables and evaluation of the simulation error of the resulting estimates. It gives a general purpose implementation of simulated maximum likelihood and uses it to re-visit four models that have previously appeared in the published literature: a state–space model for count data; a nested random effects model for binomial data; a nonlinear growth model with crossed random effects; and a crossed random effects model for binary salamander-mating data. In the case of the last three examples, this appears to be the first time that maximum likelihood fits of these models have been presented.

14.
Mixture cure models are widely used when a proportion of patients are cured. The proportional hazards mixture cure model and the accelerated failure time mixture cure model are the most popular models in practice. Usually the expectation–maximisation (EM) algorithm is applied to both models for parameter estimation. Bootstrap methods are used for variance estimation. In this paper we propose a smooth semi-nonparametric (SNP) approach in which maximum likelihood is applied directly to mixture cure models for parameter estimation. The variance can be estimated by the inverse of the second derivative of the SNP likelihood. A comprehensive simulation study indicates good performance of the proposed method. We investigate stage effects in breast cancer by applying the proposed method to breast cancer data from the South Carolina Cancer Registry.

15.
This paper demonstrates that well-known parameter estimation methods for Gaussian fields place different emphasis on the high- and low-frequency components of the data. As a consequence, the relative importance of the frequencies under the objective of the analysis should be taken into account when selecting an estimation method, in addition to other considerations such as statistical and computational efficiency. The paper also shows that when noise is added to the Gaussian field, maximum pseudolikelihood automatically sets the smoothing parameter of the model equal to one. A simulation study then indicates that generalised cross-validation is more robust than maximum likelihood under model misspecification in smoothing and image restoration problems. This has implications for Bayesian procedures, since these use the same weightings of the frequencies as the likelihood.

16.
Abstract. Parameter estimation in diffusion processes from discrete observations up to a first-passage time is clearly of practical relevance, but does not seem to have been studied so far. In neuroscience, many models for the membrane potential evolution involve the presence of an upper threshold. Data are modelled as discretely observed diffusions which are killed when the threshold is reached. Statistical inference is often based on a misspecified likelihood that ignores the presence of the threshold, causing severe bias; for example, the bias incurred in the drift parameters of the Ornstein–Uhlenbeck model can be 25–100 per cent for biologically relevant parameters. We compute or approximate the likelihood function of the killed process. When estimating from a single trajectory, considerable bias may still be present, and the distribution of the estimates can be heavily skewed and have huge variance. Parametric bootstrap is effective in correcting the bias. Standard asymptotic results do not apply, but consistency and asymptotic normality may be recovered when multiple trajectories are observed, provided the mean first-passage time through the threshold is finite. Numerical examples illustrate the results, and an experimental data set of intracellular recordings of the membrane potential of a motoneuron is analysed.

17.
A weighted-area estimation technique which allows explicit estimation of the parameters of the growth curve lt = L∞[1 − exp(−K(t − t0))] is introduced. Simulated data are used to compare an optimal weighted-area method with maximum likelihood. Length-at-age data from fisheries field studies are used to compare the weighted-area and maximum likelihood techniques with the "graphical" Ford–Walford method. The weighted-area methods achieve 80–90% efficiency on simulated data, and are shown to provide robust and sensible estimates on field data.
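For reference, the growth curve named in this abstract is the von Bertalanffy model. A plain least-squares fit, one of the baselines such weighted-area methods are compared against (the weighted-area method itself is not reproduced here, and the data are invented), can be sketched as:

```python
# Sketch: least-squares fit of the von Bertalanffy growth curve
# l_t = L_inf * (1 - exp(-K * (t - t0))) to simulated length-at-age data.
import numpy as np
from scipy.optimize import curve_fit

def vbgf(t, l_inf, k, t0):
    return l_inf * (1 - np.exp(-k * (t - t0)))

rng = np.random.default_rng(3)
age = np.arange(1, 11, dtype=float)            # ages 1..10 (invented)
length = vbgf(age, 100.0, 0.3, -0.5) + rng.normal(0, 2.0, age.size)

popt, _ = curve_fit(vbgf, age, length, p0=[80.0, 0.2, 0.0])
l_inf_hat, k_hat, t0_hat = popt
```

Under Gaussian errors this least-squares fit coincides with maximum likelihood for (L∞, K, t0), which is why it serves as a natural comparison point for the Ford–Walford and weighted-area approaches.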

18.
In the expectation–maximization (EM) algorithm for maximum likelihood estimation from incomplete data, Markov chain Monte Carlo (MCMC) methods have long been used in change-point inference when the expectation step is intractable. However, conventional MCMC algorithms tend to get trapped in local modes when simulating from the posterior distribution of change points. To overcome this problem, in this paper we propose a stochastic approximation Monte Carlo version of EM (SAMCEM), which combines adaptive Markov chain Monte Carlo with EM-based maximum likelihood. SAMCEM is compared with the stochastic approximation version of EM and the reversible jump Markov chain Monte Carlo version of EM on simulated and real datasets. The numerical results indicate that SAMCEM outperforms the other two methods, producing much more accurate parameter estimates and recovering change-point positions and parameter estimates simultaneously.

19.
In this paper we consider the analysis of recall-based competing risks data. The chance that an individual recalls the exact time to event depends on the time of occurrence of the event and the time at which the individual is observed. In particular, it is assumed that the probability of recall depends on the time elapsed since the occurrence of the event. In this study we consider likelihood-based inference for the analysis of recall-based competing risks data. The likelihood function is constructed by incorporating the information about the probability of recall, and we consider maximum likelihood estimation of the parameters. Simulation studies are carried out to examine the performance of the estimators. The proposed estimation procedure is applied to a real-life data set.

20.
An extension of the generalized linear mixed model was constructed to simultaneously accommodate overdispersion and hierarchies present in longitudinal or clustered data. This so-called combined model includes conjugate random effects at the observation level to handle overdispersion and normal random effects at the subject level to handle correlation. A variety of data types can be handled in this way, using different members of the exponential family. Both maximum likelihood and Bayesian estimation of covariate effects and variance components were proposed. The focus of this paper is the development of an estimation procedure for the two sets of random effects. These are necessary when making predictions for future responses or their associated probabilities. Such (empirical) Bayes estimates will also be helpful in model diagnosis, both when checking the fit of the model and when investigating outlying observations. The proposed procedure is applied to three datasets of different outcome types. Copyright © 2014 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号