1.
R.M. Huggins. Australian & New Zealand Journal of Statistics, 1993, 35(1): 43-57.
Quantitative traits measured over pedigrees of individuals may be analysed using maximum likelihood estimation, assuming that the trait has a multivariate normal distribution. This approach is often used in the analysis of mixed linear models. In this paper a robust version of the log likelihood for multivariate normal data is used to construct M-estimators which are resistant to contamination by outliers. The robust estimators are found using a minimisation routine which retains the flexible parameterisations of the multivariate normal approach. Asymptotic properties of the estimators are derived, computation of the estimates and their use in outlier detection tests are discussed, and a small simulation study is conducted.
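As a rough illustration of how such a robust log-likelihood can be built, the following is a generic Huber-type construction in assumed notation; it is a sketch of the general idea, not necessarily the authors' exact choice of bounding function.

```latex
% Gaussian log-likelihood contribution for an observation vector y with mean \mu(\theta)
% and covariance \Sigma(\theta), written via the Mahalanobis distance d:
\[
\ell(\theta; y) = -\tfrac{1}{2}\log\lvert\Sigma(\theta)\rvert - \tfrac{1}{2}d^{2},
\qquad d^{2} = (y - \mu)^{\top}\Sigma^{-1}(y - \mu).
\]
% A robust (M-estimation) version bounds the influence of large d, e.g.
\[
\ell_{\rho}(\theta; y) = -\tfrac{1}{2}\log\lvert\Sigma(\theta)\rvert - \rho(d),
\]
% where \rho grows quadratically for small d but only linearly beyond a tuning constant,
% so outlying observations receive bounded influence.
```

Summing such bounded contributions over the pedigree and minimising the negative sum gives an M-estimator, while the flexible parameterisations of the mean and covariance from the mixed-model formulation carry over unchanged.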
2.
Jeffrey M. Albert, Anant M. Kshirsagar. Australian & New Zealand Journal of Statistics, 1993, 35(3): 345-357.
This paper presents a method of discriminant analysis especially suited to longitudinal data. The approach is in the spirit of canonical variate analysis (CVA) and is similarly intended to reduce the dimensionality of multivariate data while retaining information about group differences. A drawback of CVA is that it does not take advantage of special structures that may be anticipated in certain types of data. For longitudinal data, it is often appropriate to specify a growth curve structure (as given, for example, in the model of Potthoff & Roy, 1964). The present paper focuses on this growth curve structure, utilizing it in a model-based approach to discriminant analysis. For this purpose the paper presents an extension of the reduced-rank regression model, referred to as the reduced-rank growth curve (RRGC) model. It estimates discriminant functions via maximum likelihood and gives a procedure for determining dimensionality. This methodology is exploratory only, and is illustrated by a well-known dataset from Grizzle & Allen (1969).
3.
Methods of detecting influential observations for the normal model for censored data are proposed. These methods include one-step deletion methods, deletion of observations and the empirical influence function. Emphasis is placed on assessing the impact that a single observation has on the estimation of coefficients of the model. Functions of the coefficients such as the median lifetime are also considered. Results are compared when applied to two sets of data.
4.
Paul S.F. Yip, Liqun Xi, Richard Arnold, Yu Hayakawa. Australian & New Zealand Journal of Statistics, 2005, 47(3): 299-308.
This paper compares the properties of various estimators for a beta-binomial model for estimating the size of a heterogeneous population. It is found that maximum likelihood and conditional maximum likelihood estimators perform well for a large population with a large capture proportion. The jackknife and the sample coverage estimators are biased for low capture probabilities. The performance of the martingale estimator is satisfactory, but it requires full capture histories. The Gibbs sampler and Metropolis-Hastings algorithm provide reasonable posterior estimates for informative priors.
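For orientation, here is a minimal sketch of what direct maximum likelihood looks like for a beta-binomial capture-recapture model; the notation, data and parameterisation are assumptions for illustration only, not the paper's code or estimators.

```python
# Minimal sketch (assumed notation): ML for a beta-binomial capture-recapture model.
# N animals, t occasions; an animal is caught k times with probability
# p_k = C(t, k) * B(a + k, b + t - k) / B(a, b); animals never caught are unobserved.
# Data are the frequencies f[k-1] of animals caught exactly k times.
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def neg_log_lik(params, f, t):
    u, la, lb = params                       # unconstrained working parameters
    D = f.sum()                              # number of distinct animals observed
    N = D + np.exp(u)                        # population size, constrained to N >= D
    a, b = np.exp(la), np.exp(lb)            # beta parameters, constrained > 0
    k = np.arange(t + 1)
    # log P(caught k times) under the beta-binomial
    logp = (gammaln(t + 1) - gammaln(k + 1) - gammaln(t - k + 1)
            + betaln(a + k, b + t - k) - betaln(a, b))
    # multinomial log-likelihood, up to a constant not involving the parameters;
    # the unobserved (k = 0) cell has count N - D
    ll = (gammaln(N + 1) - gammaln(N - D + 1)
          + (N - D) * logp[0] + np.sum(f * logp[1:]))
    return -ll

# usage sketch with hypothetical frequencies over t = 6 occasions
t = 6
f = np.array([42, 18, 9, 4, 2, 1], dtype=float)
fit = minimize(neg_log_lik, x0=np.array([3.0, 0.0, 0.0]), args=(f, t), method="Nelder-Mead")
N_hat = f.sum() + np.exp(fit.x[0])
print(round(N_hat))
```

The conditional maximum likelihood variant typically maximises the likelihood of the observed counts given capture at least once (probabilities p_k / (1 - p_0)) and then recovers N from the estimated probability of never being caught.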
5.
Communications in Statistics - Theory and Methods, 2013, 42(11): 1913-1926.
In this paper, we propose a new probability model called the log-EIG distribution for lifetime data analysis. Some important properties of the proposed model and maximum likelihood estimation of its parameters are discussed. Its relationship with the exponential inverse Gaussian distribution is similar to that of the lognormal and the normal distributions. Through applications to well-known datasets, we show that the log-EIG distribution competes well, and in some instances even provides a better fit than the commonly used lifetime models such as the gamma, lognormal, Weibull and inverse Gaussian distributions. It can accommodate situations where an increasing failure rate model is required as well as those with a decreasing failure rate at larger times.
6.
The procedure of Verbyla & Cullis (1990) is extended to cater for the analysis of repeated measures data in which non-linear modelling of the treatment contrasts is required and/or there are time-dependent covariates. These extensions are illustrated via two agricultural data sets.
7.
This paper deals with the regression analysis of failure time data when there are censoring and multiple types of failures. We propose a semiparametric generalization of a parametric mixture model of Larson & Dinse (1985), for which the marginal probabilities of the various failure types are logistic functions of the covariates. Given the type of failure, the conditional distribution of the time to failure follows a proportional hazards model. A marginal likelihood approach to estimating regression parameters is suggested, whereby the baseline hazard functions are eliminated as nuisance parameters. The Monte Carlo method is used to approximate the marginal likelihood; the resulting function is maximized easily using existing software. Some guidelines for choosing the number of Monte Carlo replications are given. Fixing the regression parameters at their estimated values, the full likelihood is maximized via an EM algorithm to estimate the baseline survivor functions. The methods suggested are illustrated using the Stanford heart transplant data.
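The model structure being described can be summarised as follows; this is a sketch in assumed notation, not copied from the paper.

```latex
% Marginal (mixture) probability that a subject with covariates x eventually fails from cause j:
\[
\Pr(J = j \mid x) = \frac{\exp(x^{\top}\beta_j)}{\sum_{k}\exp(x^{\top}\beta_k)}.
\]
% Conditional hazard of the failure time, given that the cause is j:
\[
\lambda(t \mid J = j, x) = \lambda_{0j}(t)\exp(x^{\top}\gamma_j),
\]
% with the baseline hazards \lambda_{0j}(t) treated as nuisance functions, to be
% eliminated via the marginal likelihood approach described above.
```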
8.
Scott D. Foster, Arūnas P. Verbyla, Wayne S. Pitchford. Australian & New Zealand Journal of Statistics, 2009, 51(1): 43-61.
The least absolute shrinkage and selection operator (LASSO) can be formulated as a random effects model with an associated variance parameter that can be estimated with other components of variance. In this paper, estimation of the variance parameters is performed by means of an approximation to the marginal likelihood of the observed outcomes. The approximation is based on an alternative but equivalent formulation of the LASSO random effects model. Predictions can be made using point summaries of the predictive distribution of the random effects given the data with the parameters set to their estimated values. The standard LASSO method uses the mode of this distribution as the predictor. It is not the only choice, and a number of other possibilities are defined and empirically assessed in this article. The predictive mode is competitive with the predictive mean (best predictor), but no single predictor performs best across all situations. Inference for the LASSO random effects is performed using predictive probability statements, which are more appropriate under the random effects formulation than tests of hypothesis.
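One standard way to write the connection between the penalised and random-effects views of the LASSO, in assumed notation (the paper's exact parameterisation may differ):

```latex
% Penalised least-squares form of the LASSO:
\[
\hat{\beta}_{\mathrm{lasso}} = \arg\min_{\beta}\; \lVert y - X\beta \rVert^{2} + \lambda \sum_{j} \lvert \beta_j \rvert.
\]
% Equivalent random-effects form: independent Laplace (double-exponential) effects,
\[
p(\beta_j \mid \sigma_{\beta}) = \frac{1}{2\sigma_{\beta}} \exp\!\left( -\lvert \beta_j \rvert / \sigma_{\beta} \right),
\]
% under which the usual LASSO estimate is the mode of the distribution of \beta given the
% data, the predictive mean ("best predictor") is an alternative point summary, and
% \sigma_{\beta} plays the role of the variance parameter estimated from the marginal likelihood.
```

Under this formulation the familiar LASSO estimate is just one point summary of the distribution of the effects given the data, which is what motivates comparing it with the predictive mean and other summaries.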
9.
Keith Freeland. Australian & New Zealand Journal of Statistics, 2011, 53(1): 43-62.
A test is derived for short-memory correlation in the conditional variance of strictly positive, skewed data. The test is quasi-locally most powerful (QLMP) under the assumption of conditionally gamma data. Analytical asymptotic relative efficiency calculations show that an alternative test, based on the first-order autocorrelation coefficient of the squared data, has negligible relative power to detect correlation in the conditional variance. Finite-sample simulation results confirm the poor performance of the squares-based test for fixed alternatives, as well as demonstrating the poor performance of the test based on the first-order autocorrelation coefficient of the raw (levels) data. The robustness of the QLMP test, both to misspecification of the conditional distribution and to misspecification of the dynamics, is also demonstrated using simulation. The test is illustrated using financial trade durations data.
10.
A. James O'Malley, Murray H. Smith, William A. Sadler. Australian & New Zealand Journal of Statistics, 2008, 50(2): 161-177.
Restricted maximum likelihood (REML) is a procedure for estimating a variance function in a heteroscedastic linear model. Although REML has been extended to non-linear models, the case in which the data are dominated by replicated observations with unknown values of the independent variable of interest, such as the concentration of a substance in a blood sample, has not been considered. We derive a REML procedure for an immunoassay and show that the resulting estimator is superior to those currently being used. Some interesting properties of the REML estimator are derived, and its relationship to other estimators is discussed.
11.
Geert Molenberghs & Els Goetghebeur. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 1997, 59(2): 401-414.
A popular approach to estimation based on incomplete data is the EM algorithm. For categorical data, this paper presents a simple expression of the observed data log-likelihood and its derivatives in terms of the complete data for a broad class of models and missing data patterns. We show that using the observed data likelihood directly is easy and has some advantages. One can gain considerable computational speed over the EM algorithm and a straightforward variance estimator is obtained for the parameter estimates. The general formulation treats a wide range of missing data problems in a uniform way. Two examples are worked out in full.
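A generic version of the kind of expression involved, in assumed notation (not the paper's exact formula):

```latex
% Complete-data cells c have probabilities \pi_c(\theta); an incomplete (coarsened) record
% with observed pattern r is consistent with the set of cells C(r); n_r records show pattern r.
\[
\ell_{\mathrm{obs}}(\theta) = \sum_{r} n_r \log \sum_{c \in C(r)} \pi_c(\theta),
\qquad
\frac{\partial \ell_{\mathrm{obs}}}{\partial \theta}
= \sum_{r} n_r \,
\frac{\sum_{c \in C(r)} \partial \pi_c(\theta) / \partial \theta}
     {\sum_{c \in C(r)} \pi_c(\theta)}.
\]
% Maximising \ell_{\mathrm{obs}} directly (e.g. by Newton-type iterations) yields both the
% estimates and a variance estimator from the observed information, without running EM.
```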
12.
The p-variate Burr distribution has been derived, developed, discussed and deployed by various authors. In this paper a score statistic for testing independence of the components, equivalent to testing for p independent Weibull distributions against a p-variate Burr alternative, is obtained. Its null and non-null properties are investigated with and without nuisance parameters and including the possibility of censoring. Two applications to real data are described. The test is also discussed in the context of other Weibull mixture models.
13.
A New Approach to Maximum Likelihood Estimation of the Three-Parameter Gamma and Weibull Distributions
Jun Bai, Anthony J. Jakeman, Michael McAleer. Australian & New Zealand Journal of Statistics, 1991, 33(3): 397-410.
A new approach is proposed for maximum likelihood (ML) estimation in continuous univariate distributions. The procedure is intended primarily to complement the standard ML method, which can fail for distributions such as the gamma and Weibull when the shape parameter is at most unity. The new approach provides consistent and efficient estimates for all possible values of the shape parameter. Its performance is examined via simulations. Two other, improved, general ML methods are reported for comparative purposes. The methods are used to fit the gamma and Weibull distributions to air pollution data from Melbourne. The new ML method is accurate when the shape parameter is less than unity and is also superior to the maximum product of spacings estimation method for the Weibull distribution.
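For context, here is a short sketch of ordinary three-parameter Weibull ML fitting with SciPy. This is the standard approach that the paper aims to complement, not the authors' new estimator, and the data and parameter values are hypothetical.

```python
# Standard three-parameter Weibull ML fit, shown only to illustrate the setting.
# With shape <= 1 the likelihood is unbounded as the threshold (location) approaches
# the smallest observation, which is the failure mode the new approach is designed to avoid.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# simulated data with shape 0.8 (< 1), threshold 2.0, scale 5.0 -- hypothetical values
data = stats.weibull_min.rvs(c=0.8, loc=2.0, scale=5.0, size=200, random_state=rng)

# scipy.stats.weibull_min.fit returns (shape, loc, scale); loc plays the role of the threshold
shape_hat, loc_hat, scale_hat = stats.weibull_min.fit(data)
print(shape_hat, loc_hat, scale_hat)   # for shape <= 1 these estimates can be unstable
```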
14.
In the present article we introduce a new class of distributions which nests the classical logistic distribution and offers additional flexibility for data fitting. We provide exact expressions for its moments and absolute moments, investigate its ageing properties, and discuss several techniques for estimating its parameters. Finally, we use the new family to build a parametric model that accurately describes the Euro/CAD exchange reference rates for the period 1/4/1999–12/31/2011.
15.
We describe how a log-linear model can be used to compute the nonparametric maximum likelihood estimate of the survival curve from interval-censored data. This permits such computation to be performed with the aid of readily available statistical software such as GLIM or SAS. The method is illustrated with reference to data from a cohort of Danish homosexual men, each of whom was tested for HIV positivity on one or more of six possible follow-up times.
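The paper's point is that this nonparametric MLE can be obtained by fitting a log-linear model in standard packages such as GLIM or SAS. For comparison only, the following is a minimal self-consistency (Turnbull-type) EM sketch in NumPy that targets the same estimate; the data and notation are hypothetical, and this is not the paper's log-linear formulation.

```python
import numpy as np

# interval-censored observations: the event time for subject i lies in (L[i], R[i]]
L = np.array([0.0, 0.0, 1.0, 2.0, 3.0, 1.0])
R = np.array([2.0, 1.0, 3.0, 4.0, np.inf, np.inf])   # np.inf marks right-censoring

# candidate support points; a careful implementation would restrict mass to
# Turnbull's innermost intervals
s = np.unique(np.concatenate([L, R[np.isfinite(R)]]))
A = (s[None, :] > L[:, None]) & (s[None, :] <= R[:, None])   # A[i, j]: s[j] in (L[i], R[i]]

p = np.full(s.size, 1.0 / s.size)        # initial probability mass on each support point
for _ in range(1000):
    num = A * p                           # E-step: mass consistent with each subject
    cond = num / num.sum(axis=1, keepdims=True)
    p_new = cond.mean(axis=0)             # M-step: average the expected allocations
    converged = np.max(np.abs(p_new - p)) < 1e-10
    p = p_new
    if converged:
        break

surv = 1.0 - np.cumsum(p)                 # estimated survival curve at the support points
print(np.column_stack([s, surv]))
```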
16.
Martin Crowder. Scandinavian Journal of Statistics, 1998, 25(1): 53-67.
A parametric multivariate failure time distribution is derived from a frailty-type model with a particular frailty distribution. It covers as special cases certain distributions which have been used for multivariate survival data in recent years. Some properties of the distribution are derived: its marginal and conditional distributions lie within the parametric family, and association between the component variates can be positive or, to a limited extent, negative. The simple closed form of the survivor function is useful for right-censored data, as occur commonly in survival analysis, and for calculating uniform residuals. Also featured is the distribution of ratios of paired failure times. The model is applied to data from the literature.
17.
In this paper we discuss the survival analysis for a clinical trial in which treatment categories and general prognostic data are realised at different stages during a patient's survival time. In the light of possible strategies for the parsimonious modelling of such data, a corresponding sequence of illustrative analyses is presented. Detailed results are given for a weighted least squares analysis and these generally agree with those obtained by maximum likelihood.
18.
P. Besbeas, S.N. Freeman, B.J.T. Morgan. Australian & New Zealand Journal of Statistics, 2005, 47(1): 35-48.
Recent work has shown how the Kalman filter can be used to provide a simple framework for the integrated analysis of wild animal census and mark-recapture-recovery data. The approach has been applied to data on a range of bird species, on Soay sheep and on grey seals. This paper reviews the basic ideas, and then indicates the potential of the method through a series of new applications to data on the northern lapwing, a species of conservation interest that has been in decline in Britain for the past 20 years. The paper analyses a national index, as well as data from individual sites; it looks for a change-point in productivity, corresponding to the start of the decline in numbers, considers how to select appropriate covariates, and compares productivity between different habitats. The new procedures can be applied singly or in combination.
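To make the state-space idea concrete, here is a minimal Kalman filter recursion for a single noisy census index. The transition multiplier, variances and counts are hypothetical and far simpler than the integrated lapwing model described above, where the demographic parameters are informed jointly by the mark-recapture-recovery likelihood.

```python
import numpy as np

y = np.array([102.0, 98.0, 95.0, 91.0, 85.0, 80.0])  # hypothetical annual census counts

phi = 0.97        # assumed annual growth/survival multiplier (state transition)
q = 4.0           # state (process) noise variance
r = 25.0          # observation noise variance

x, P = y[0], 100.0          # initial state estimate and its variance
filtered = []
for obs in y:
    # predict the population index forward one year
    x_pred = phi * x
    P_pred = phi**2 * P + q
    # update with this year's observed count
    K = P_pred / (P_pred + r)            # Kalman gain
    x = x_pred + K * (obs - x_pred)
    P = (1.0 - K) * P_pred
    filtered.append(x)

print(np.round(filtered, 1))
```

In an integrated analysis the same filtering recursion also delivers the census contribution to the joint likelihood, which is then combined with the mark-recapture-recovery likelihood when estimating survival and productivity.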