Similar Articles
20 similar articles found.
1.
2.
The maximum likelihood estimators of the unknown parameters in the growth curve model with a serial covariance structure are derived in this paper under some conditions.

3.
It is generally considered that analysis of variance by maximum likelihood or its variants is computationally impractical, despite existing techniques for reducing the computational effort per iteration and for reducing the number of iterations to convergence. This paper shows that a major reduction in the overall computational effort can be achieved through the use of sparse-matrix algorithms that take advantage of the factorial designs that characterize most applications of large analysis-of-variance problems. In this paper, an algebraic structure for factorial designs is developed. Through this structure, it is shown that the required computations can be arranged so that sparse-matrix methods result in greatly reduced storage and time requirements.

4.
Including time-varying covariates is a popular extension to the Cox model and a suitable approach for dealing with non-proportional hazards. However, partial likelihood (PL) estimation of this model has three shortcomings: (i) estimated regression coefficients can be less accurate in small samples with heavy censoring; (ii) the baseline hazard is not directly estimated; and (iii) a covariance matrix for both the regression coefficients and the baseline hazard is not easily produced. We address these by developing a maximum likelihood (ML) approach that jointly estimates the regression coefficients and the baseline hazard using a constrained optimisation ensuring the latter's non-negativity. We demonstrate asymptotic properties of these estimates, show via simulation their increased accuracy compared to PL estimates in small samples, and show that our method produces smoother baseline hazard estimates than the Breslow estimator. Finally, we apply our method to two examples, including an important real-world financial example estimating time to default for retail home loans. We demonstrate that using our ML estimate of the baseline hazard can give much clearer corroboratory evidence of the 'humped hazard', whereby the risk of loan default rises to a peak and then later falls.
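
As a rough illustration of what joint ML estimation with a non-negativity constraint on the baseline hazard can look like, here is a minimal sketch that fits a single-covariate model with a piecewise-constant baseline hazard. The simulated data, the cut points, and the L-BFGS-B bound constraints are all illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                                   # single covariate
t = rng.exponential(1.0 / (0.5 * np.exp(0.7 * x)))       # event times, true beta = 0.7
c = rng.exponential(2.0, size=n)                          # censoring times
time = np.minimum(t, c)
event = (t <= c).astype(float)

cuts = np.quantile(time, [0.0, 0.33, 0.66, 1.0])          # three hazard pieces
cuts[-1] += 1e-9                                          # keep the largest time inside the last piece

def neg_loglik(params):
    beta, h = params[0], params[1:]                       # h: piecewise-constant baseline hazard
    lp = beta * x
    H0 = np.zeros(n)                                      # cumulative baseline hazard at `time`
    lam0 = np.zeros(n)                                    # baseline hazard at `time`
    for j, hj in enumerate(h):
        lo, hi = cuts[j], cuts[j + 1]
        H0 += hj * np.clip(time - lo, 0.0, hi - lo)
        lam0[(time >= lo) & (time < hi)] = hj
    ll = np.sum(event * (np.log(lam0) + lp) - H0 * np.exp(lp))
    return -ll

start = np.array([0.0, 0.5, 0.5, 0.5])
bounds = [(None, None)] + [(1e-8, None)] * 3              # keep hazard pieces non-negative
fit = minimize(neg_loglik, start, bounds=bounds, method="L-BFGS-B")
print(fit.x)                                              # [beta_hat, h1_hat, h2_hat, h3_hat]
```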

5.
We consider the problem of full information maximum likelihood (FIML) estimation in factor analysis when a majority of the data values are missing. The expectation–maximization (EM) algorithm is often used to find the FIML estimates, in which the missing values on manifest variables are included in the complete data. However, the ordinary EM algorithm has an extremely high computational cost. In this paper, we propose a new algorithm that is based on the EM algorithm but that computes the FIML estimates efficiently. A significant improvement in computational speed is realized by not treating the missing values on manifest variables as part of the complete data. When there are many missing data values, it is not clear whether the FIML procedure can achieve good estimation accuracy. To investigate this, we conduct Monte Carlo simulations under a wide variety of sample sizes.
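
The core building block of FIML with incomplete multivariate data can be written in a few lines: each row contributes the Gaussian log-density of its observed coordinates only. The sketch below shows that generic building block, not the paper's accelerated EM algorithm; the function name and interface are ours.

```python
import numpy as np
from scipy.stats import multivariate_normal

def fiml_loglik(data, mu, sigma):
    """FIML log-likelihood for an (n, p) array `data` with np.nan marking missing entries."""
    ll = 0.0
    for row in data:
        obs = ~np.isnan(row)                 # which coordinates of this row are observed
        if not obs.any():
            continue                         # a fully missing row contributes nothing
        ll += multivariate_normal.logpdf(
            row[obs], mean=mu[obs], cov=sigma[np.ix_(obs, obs)]
        )
    return ll

# usage sketch: maximise fiml_loglik over mu and a structured sigma,
# e.g. sigma = L @ L.T + np.diag(psi) for a factor-analysis parameterisation
```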

6.
In earlier work, Kirchner [An estimation procedure for the Hawkes process. Quant Financ. 2017;17(4):571–595], we introduced a nonparametric estimation method for the Hawkes point process. In this paper, we present a simulation study that compares this nonparametric method to maximum-likelihood estimation. We find that the standard deviations of both estimation methods decrease as power laws in the sample size. Moreover, the standard deviations are proportional: for example, for a specific Hawkes model, the standard deviation of the branching-coefficient estimate is roughly 20% larger than for MLE, over all sample sizes considered. This factor becomes smaller as the true underlying branching coefficient becomes larger. In terms of runtime, our method clearly outperforms MLE. The bias of our method can be well explained and controlled. As an incidental finding, we see that MLE estimates also appear to be significantly biased when the underlying Hawkes model is near criticality. This calls for a more rigorous analysis of the Hawkes likelihood and its optimization.
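
For readers who want to see what the MLE side of this comparison involves, here is a minimal log-likelihood for a univariate Hawkes process with an exponential kernel; the (mu, alpha, beta) parameterisation (branching coefficient alpha/beta) and the stability check are our own assumptions, and the paper's nonparametric estimator is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def hawkes_neg_loglik(params, times, T):
    """Negative log-likelihood of event times on [0, T] under
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    mu, alpha, beta = params
    if mu <= 0 or alpha < 0 or beta <= 0 or alpha >= beta:
        return np.inf                                    # keep the process sub-critical
    A, ll, prev = 0.0, 0.0, None
    for t in times:
        if prev is not None:
            A = np.exp(-beta * (t - prev)) * (A + 1.0)   # recursive sum of kernel terms
        ll += np.log(mu + alpha * A)
        prev = t
    # compensator: integral of lambda(t) over [0, T]
    ll -= mu * T + (alpha / beta) * np.sum(1.0 - np.exp(-beta * (T - times)))
    return -ll

# usage: fit = minimize(hawkes_neg_loglik, x0=[0.5, 0.5, 1.0],
#                       args=(np.asarray(times), T), method="Nelder-Mead")
```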

7.
A graph-theoretical approach is employed to describe the support set of the nonparametric maximum likelihood estimator for the cumulative distribution function given interval-censored and left-truncated data. A necessary and sufficient condition for the existence of a nonparametric maximum likelihood estimator is then derived. Two previously analysed data sets are revisited.

8.
Quantitative cancer dose-response models play an important role in cancer risk assessment. They also play a role in regulatory processes associated with potential occupational or environmental exposures. The multistage model is currently the most widely used cancer dose-response model. This paper describes the construction of the likelihood function in the special case of the multistage cancer dose-response model. The concavity of the likelihood function is also established. A criterion is developed to determine the degree of the polynomial portion of the multistage model. Finally, the restricted and unrestricted maximum likelihood estimators are considered and applied to some experimental data sets.
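
As a concrete illustration of the likelihood in question, the sketch below writes the binomial log-likelihood for a degree-2 multistage model, P(d) = 1 - exp(-(q0 + q1*d + ... + qk*d^k)), and maximises it under non-negativity constraints on the q coefficients; the dose-response data are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

dose  = np.array([0.0, 1.0, 2.0, 4.0])      # hypothetical dose groups
n     = np.array([50, 50, 50, 50])          # animals per group
tumor = np.array([2, 6, 14, 30])            # responders per group

def neg_loglik(q):
    poly = np.polyval(q[::-1], dose)         # q0 + q1*d + ... + qk*d^k
    p = 1.0 - np.exp(-poly)                  # multistage response probability
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(tumor * np.log(p) + (n - tumor) * np.log(1.0 - p))

k = 2                                        # degree of the polynomial part
q0 = np.full(k + 1, 0.01)
fit = minimize(neg_loglik, q0, bounds=[(0.0, None)] * (k + 1), method="L-BFGS-B")
print(fit.x)                                 # restricted (q_j >= 0) ML estimates
```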

9.
In this paper, we propose a new generalized autoregressive conditional heteroskedastic (GARCH) model using infinite normal scale-mixtures, which can suitably avoid the order selection problems that arise in the application of finite normal scale-mixtures. We discuss its theoretical properties and develop a two-stage maximum likelihood algorithm that estimates the mixing distribution via its non-parametric maximum likelihood estimator (NPMLE) as well as the GARCH parameters (two-stage MLE). For the estimation of the mixing distribution, we employ the fast computational algorithm proposed by Wang [On fast computation of the non-parametric maximum likelihood estimate of a mixing distribution. J R Stat Soc Ser B. 2007;69:185–198] under the gradient characterization of the non-parametric mixture likelihood. The GARCH parameters are then estimated using either the expectation-maximization algorithm or a general optimization scheme. In addition, we propose a new forecasting algorithm for value-at-risk (VaR) using the two-stage MLE and the NPMLE. Through a simulation study and real data analysis, we compare the performance of the two-stage MLE with existing estimators, including the quasi-maximum likelihood estimator based on the standard normal density and the finite normal mixture quasi-maximum estimated-likelihood estimator (cf. Lee S, Lee T. Inference for Box–Cox transformed threshold GARCH models with nuisance parameters. Scand J Stat. 2012;39:568–589), in terms of relative efficiency and accuracy of VaR forecasting.
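
For orientation, here is the standard-normal QMLE baseline that such proposals are typically compared against: a Gaussian quasi-maximum-likelihood fit of a GARCH(1,1). This is only the benchmark side of the comparison under our own parameterisation; the NPMLE/mixture stage of the paper is not shown.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_neg_loglik(params, r):
    """Gaussian quasi-log-likelihood of a de-meaned return series r under GARCH(1,1)."""
    omega, alpha, beta = np.exp(params)      # keep parameters positive
    if alpha + beta >= 1.0:
        return np.inf                        # enforce covariance stationarity
    h = np.empty_like(r, dtype=float)
    h[0] = np.var(r)                         # initialise the conditional variance
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2.0 * np.pi * h) + r ** 2 / h)

# usage on a de-meaned return series r:
# fit = minimize(garch11_neg_loglik, x0=np.log([0.1, 0.05, 0.9]), args=(r,), method="Nelder-Mead")
# omega_hat, alpha_hat, beta_hat = np.exp(fit.x)
```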

10.
Homoscedastic and heteroscedastic Gaussian mixtures differ in the constraints placed on the covariance matrices of the mixture components. A new mixture, called herein a strophoscedastic mixture, is defined by a new constraint: the covariance matrices are required to be identical under orthogonal transformations, where different transformations are allowed for different matrices. It is shown that the M-step of the EM method for estimating the parameters of strophoscedastic mixtures from sample data is explicitly solvable using singular value decompositions. Consequently, the EM-based maximum likelihood estimation algorithm is as easily implemented for strophoscedastic mixtures as it is for homoscedastic and heteroscedastic mixtures. An example of a “noisy” Archimedean spiral is presented.

11.
The posterior probability of an object belonging to one of two populations can be estimated using multivariate logistic regression. The bias associated with this procedure is derived in the context of normal populations with different mean vectors and a common covariance matrix, and is compared with the bias of the classical method based on this normality assumption. It is found that the bias of the more robust logistic regression procedure is of a lower order than that of the normality-based method.
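
To make the two procedures being compared concrete, the sketch below estimates the posterior probability of membership in the second of two simulated normal populations both by logistic regression and by the classical normality-based (linear discriminant) plug-in rule. The simulated means, the libraries, and the query point are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n = 200
X0 = rng.multivariate_normal([0.0, 0.0], np.eye(2), n)   # population 1
X1 = rng.multivariate_normal([1.0, 1.0], np.eye(2), n)   # population 2
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

logit = LogisticRegression().fit(X, y)                    # multivariate logistic regression
lda = LinearDiscriminantAnalysis().fit(X, y)              # normality-based classical rule

x_new = np.array([[0.5, 0.5]])
print(logit.predict_proba(x_new)[0, 1])                   # logistic posterior estimate
print(lda.predict_proba(x_new)[0, 1])                     # normality-based posterior estimate
```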

12.
The paper discusses the estimation of an unknown population size n. Suppose that an identification mechanism can identify n_obs cases. The Horvitz–Thompson estimator of n adjusts this number by the inverse of 1 − p_0, where p_0 is the probability of not identifying a case. When repeated counts of identifying the same case are available, we can use the counting distribution to estimate p_0 and solve the problem. Frequently, the Poisson distribution is used and, more recently, mixtures of Poisson distributions. Maximum likelihood estimation is discussed by means of the EM algorithm. For truncated Poisson mixtures, a nested EM algorithm is suggested and illustrated for several application cases. The algorithmic principles are used to show an inequality stating that the Horvitz–Thompson estimator of n under the mixed Poisson model is always at least as large as the estimator under a homogeneous Poisson model. In turn, if the homogeneous Poisson model is misspecified it will, potentially strongly, underestimate the true population size. Examples from various areas illustrate this finding.
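
A minimal version of the homogeneous-Poisson case reads as follows: estimate the Poisson rate from the zero-truncated counts, convert it to p_0, and plug it into the Horvitz–Thompson estimator. The counts are invented, and the paper's nested EM for Poisson mixtures is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize_scalar

counts = np.array([1] * 60 + [2] * 25 + [3] * 10 + [4] * 5)   # hypothetical counts >= 1
n_obs = len(counts)

def neg_loglik(lam):
    # zero-truncated Poisson log-likelihood (terms constant in lam dropped)
    return -(np.sum(counts) * np.log(lam) - n_obs * lam
             - n_obs * np.log(1.0 - np.exp(-lam)))

lam_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 20.0), method="bounded").x
p0_hat = np.exp(-lam_hat)                     # estimated probability of never being identified
n_hat = n_obs / (1.0 - p0_hat)                # Horvitz-Thompson population-size estimate
print(lam_hat, n_hat)
```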

13.
This paper proposes a semi-parametric modelling and estimating method for analysing censored survival data. The proposed method uses the empirical likelihood function to describe the information in data, and formulates estimating equations to incorporate knowledge of the underlying distribution and regression structure. The method is more flexible than the traditional methods such as the parametric maximum likelihood estimation (MLE), Cox's (1972) proportional hazards model, accelerated life test model, quasi-likelihood (Wedderburn, 1974) and generalized estimating equations (Liang & Zeger, 1986). This paper shows the existence and uniqueness of the proposed semi-parametric maximum likelihood estimates (SMLE) with estimating equations. The method is validated with known cases studied in the literature. Several finite sample simulation and large sample efficiency studies indicate that when the sample size is larger than 100 the SMLE is compatible with the parametric MLE; and in all case studies, the SMLE is about 15% better than the parametric MLE with a mis-specified underlying distribution.

14.
For the problem of testing the homogeneity of the variances in a covariance matrix with a block compound symmetric structure, the likelihood ratio test is derived in this paper. A modification of the test that allows its distribution to be better approximated by the chi-square distribution is also considered. Formulae for calculating approximate sample size and power are derived. Small-sample performances of these tests in the case of two dependent bivariate or trivariate normals are compared to each other and to the competing tests by simulating levels of significance and powers, and a recommendation is made of the ones that have good performance. The recommended tests are then demonstrated in an illustrative example.

15.
Pharmacokinetic (PK) data often contain concentration measurements below the quantification limit (BQL). While specific values cannot be assigned to these observations, these observed BQL data are nevertheless informative and generally known to be lower than the lower limit of quantification (LLQ). Treating BQLs as missing data violates the usual missing at random (MAR) assumption underlying the statistical methods, and therefore leads to biased or less precise parameter estimation. By definition, these data lie within the interval [0, LLQ] and can be considered censored observations. Statistical methods that handle censored data, such as maximum likelihood and Bayesian methods, are thus useful in modelling such data sets. The main aim of this work was to investigate the impact of the amount of BQL observations on the bias and precision of parameter estimates in population PK models (non-linear mixed effects models in general) under the maximum likelihood method as implemented in SAS and NONMEM, and a Bayesian approach using Markov chain Monte Carlo (MCMC) as applied in WinBUGS. A second aim was to compare these different methods for dealing with BQL or censored data in a practical situation. The evaluation was illustrated by simulation based on a simple PK model, where a number of data sets were simulated from a one-compartment first-order elimination PK model. Several quantification limits were applied to each of the simulated data sets to generate data sets with certain amounts of BQL data. The average percentage of BQL observations ranged from 25% to 75%. Their influence on the bias and precision of all population PK model parameters, such as clearance and volume of distribution, under each estimation approach was explored and compared.
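
A toy version of the censoring idea, stripped of the population PK structure: log-concentrations below log(LLQ) enter the likelihood through the normal CDF instead of being discarded. The lognormal concentration model and all numbers below are illustrative assumptions, not the simulation design of the paper.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(2)
LLQ = 1.5
conc = rng.lognormal(mean=1.0, sigma=0.8, size=200)    # simulated concentrations
y, L = np.log(conc), np.log(LLQ)
cens = y < L                                           # BQL flags (left-censored)

def neg_loglik(params):
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = norm.logpdf(y[~cens], mu, sigma).sum()         # fully quantified observations
    ll += cens.sum() * norm.logcdf((L - mu) / sigma)    # each BQL value contributes P(Y < log LLQ)
    return -ll

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))                       # estimates of mu and sigma
```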

16.
The multivariate Student-t copula family is used in statistical finance and other areas when there is tail dependence in the data. It is often a good-fitting copula but can be improved upon when there is tail asymmetry. Multivariate skew-t copula families can be considered when there is both tail dependence and tail asymmetry, and we show how a fast numerical implementation of maximum likelihood estimation is possible. For the copula implicit in a multivariate skew-t distribution, the fast implementation makes use of (i) monotone interpolation of the univariate marginal quantile function and (ii) a re-parametrization of the correlation matrix. Our numerical approach is tested with simulated data with data-driven parameters. A real data example involves the daily returns of three stock indices: the Nikkei225, S&P500 and DAX. With both unfiltered returns and GARCH/EGARCH filtered returns, we compare the fits of the Azzalini–Capitanio skew-t, generalized hyperbolic skew-t, Student-t, skew-Normal and Normal copulas.
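
For orientation, the sketch below fits the symmetric Student-t copula (not the skew-t) to bivariate pseudo-observations by direct maximum likelihood. The tanh/exp re-parameterisation and the use of scipy's multivariate_t are our choices, and none of the paper's speed-ups (quantile interpolation, correlation re-parametrization) are included.

```python
import numpy as np
from scipy.stats import t as student_t, multivariate_t
from scipy.optimize import minimize

def t_copula_neg_loglik(params, u):
    """Negative log-likelihood of a bivariate Student-t copula at pseudo-observations u in (0,1)^2."""
    rho = np.tanh(params[0])                       # unconstrained -> (-1, 1)
    nu = np.exp(params[1]) + 2.0                   # unconstrained -> (2, inf)
    x = student_t.ppf(u, df=nu)                    # map uniforms to t margins
    corr = np.array([[1.0, rho], [rho, 1.0]])
    joint = multivariate_t.logpdf(x, loc=[0.0, 0.0], shape=corr, df=nu)
    margins = student_t.logpdf(x, df=nu).sum(axis=1)
    return -np.sum(joint - margins)                # copula density = joint / product of margins

# usage: u are pseudo-observations, e.g. column-wise ranks/(n+1) of two return series
# fit = minimize(t_copula_neg_loglik, x0=[0.0, np.log(5.0)], args=(u,), method="Nelder-Mead")
```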

17.
In medical studies we are often confronted with complex longitudinal data. During the follow-up period, which can be ended prematurely by a terminal event (e.g. death), a subject can experience recurrent events of multiple types. In addition, we collect repeated measurements from multiple markers. An adverse health status, represented by ‘bad’ marker values and an abnormal number of recurrent events, is often associated with the risk of experiencing the terminal event. In this situation, the missingness of the data is not at random and, to avoid bias, it is necessary to model all data simultaneously using a joint model. The correlations between the repeated observations of a marker or an event type within an individual are captured by normally distributed random effects. Because the joint likelihood contains an analytically intractable integral, Bayesian approaches or quadrature approximation techniques are necessary to evaluate the likelihood. However, when the number of recurrent event types and markers is large, the dimensionality of the integral is high and these methods are too computationally expensive. As an alternative, we propose a simulated maximum-likelihood approach based on quasi-Monte Carlo integration to evaluate the likelihood of joint models with multiple recurrent event types and markers.

18.
19.
Simulated maximum likelihood estimates an analytically intractable likelihood function with an empirical average based on data simulated from a suitable importance sampling distribution. In order to use simulated maximum likelihood efficiently, the choice of the importance sampling distribution, as well as the mechanism used to generate the simulated data, is crucial. In this paper we develop a new heuristic for an automated, multistage implementation of simulated maximum likelihood which, by adaptively updating the importance sampler, approximates the (locally) optimal importance sampling distribution. The proposed approach also allows for a convenient incorporation of quasi-Monte Carlo methods. Quasi-Monte Carlo methods produce simulated data which can significantly increase the accuracy of the likelihood estimate over regular Monte Carlo methods. Several examples provide evidence for the potential efficiency gain of this new method. We apply the method to a computationally challenging geostatistical model of online retailing.
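
The sketch below shows the basic mechanics of simulated maximum likelihood for a Poisson model with a Gaussian random intercept: the per-cluster integral is replaced by an importance-sampling average computed with common random numbers. The fixed, over-dispersed importance density used here is only a placeholder for the adaptive, multistage updating proposed in the paper; the model, data, and sample sizes are invented.

```python
import numpy as np
from scipy.stats import norm, poisson
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_clusters, n_per = 50, 5
b_true = rng.normal(0.0, 0.7, n_clusters)                      # true random intercepts
y = rng.poisson(np.exp(0.5 + b_true)[:, None], size=(n_clusters, n_per))

M = 500
z = rng.normal(size=M)                     # common random numbers, reused at every evaluation

def neg_sim_loglik(params):
    beta, log_sigma = params
    sigma = np.exp(log_sigma)
    ll = 0.0
    for yi in y:
        draws = 2.0 * sigma * z            # importance density: N(0, (2*sigma)^2), wider than the prior
        w = norm.pdf(draws, 0.0, sigma) / norm.pdf(draws, 0.0, 2.0 * sigma)
        lik = poisson.pmf(yi[:, None], np.exp(beta + draws)[None, :]).prod(axis=0)
        ll += np.log(np.mean(w * lik) + 1e-300)                 # simulated per-cluster likelihood
    return -ll

fit = minimize(neg_sim_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x[0], np.exp(fit.x[1]))          # estimates of beta and the random-effect SD
```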

20.
The authors achieve robust estimation of parametric models through the use of weighted maximum likelihood techniques. A new estimator is proposed and its good properties illustrated through examples. Ease of implementation is an attractive property of the new estimator. The new estimator downweights with respect to the model and can be used for complicated likelihoods such as those involved in bivariate extreme value problems. New weight functions, tailored for these problems, are constructed. The increased insight provided by our robust fits to these bivariate extreme value models is exhibited through the analysis of sea levels at two East Coast sites in the United Kingdom.
