Similar Documents
20 similar documents found.
1.
Recently Beh and Farver investigated and evaluated three non‐iterative procedures for estimating the linear‐by‐linear parameter of an ordinal log‐linear model. The study demonstrated that these non‐iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from estimates from Newton's unidimensional algorithm. Here we show how two of these techniques are related using the Box–Cox transformation. We also show that by using this transformation, accurate non‐iterative estimates are achievable even when a contingency table contains sampling zeros.
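The Box–Cox transformation referenced above is easy to state on its own. The sketch below shows only the transformation itself, not the authors' non‐iterative estimators of the linear‐by‐linear parameter; the function name and example values are illustrative.

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox power transformation of strictly positive data.

    Returns (y**lam - 1)/lam for lam != 0 and log(y) in the limiting
    case lam == 0 (the log transform underlying log-linear models).
    """
    y = np.asarray(y, dtype=float)
    if np.any(y <= 0):
        raise ValueError("Box-Cox requires strictly positive data")
    if np.isclose(lam, 0.0):
        return np.log(y)
    return (y**lam - 1.0) / lam

# lam = 0 recovers the log transform; lam = 0.5 gives a square-root-type scale
print(box_cox([1.0, 2.0, 4.0], 0.0))
print(box_cox([1.0, 2.0, 4.0], 0.5))
```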

2.
The authors derive closed‐form expressions for the full, profile, conditional and modified profile likelihood functions for a class of random growth parameter models they develop, as well as for Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional and modified profile likelihood functions are maximized over only a few parameters to yield a complete set of parameter estimates. In developing their random growth parameter models, the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way that gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

3.
The authors propose a simple but general method of inference for a parametric function of the Box‐Cox‐type transformation model. Their approach is built upon classical normal theory but takes parameter estimation into account. It quickly leads to test statistics and confidence intervals for a linear combination of scaled or unscaled regression coefficients, as well as for the survivor function and marginal effects on the median or other quantile functions of an original response. The authors show through simulations that the finite‐sample performance of their method is often superior to that of the delta method, and that their approach is robust to mild departures from normality of the error distributions. They illustrate their approach with a numerical example.
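The delta method used as the comparison benchmark above is the standard first-order device for standard errors of a smooth function of an estimate. A minimal numerical reminder, with a made-up estimate and a made-up transformation g(theta) = exp(theta); this is not the authors' procedure.

```python
import numpy as np

theta_hat = 0.4     # made-up point estimate
se_theta = 0.1      # made-up standard error of theta_hat

g = np.exp          # function of interest, g(theta) = exp(theta)
g_prime = np.exp    # its derivative

# Delta method: Var(g(theta_hat)) ~= g'(theta_hat)^2 * Var(theta_hat)
se_g = abs(g_prime(theta_hat)) * se_theta
estimate = g(theta_hat)
ci = (estimate - 1.96 * se_g, estimate + 1.96 * se_g)
print(estimate, se_g, ci)
```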

4.
Abstract.  The Pearson diffusions form a flexible class of diffusions defined by having linear drift and quadratic squared diffusion coefficient. It is demonstrated that explicit statistical inference is feasible for this class. A complete model classification is presented for the ergodic Pearson diffusions. The class of stationary distributions equals the full Pearson system of distributions. Well-known instances are the Ornstein–Uhlenbeck processes and the square root (CIR) processes. Diffusions with heavy-tailed and skewed marginals are also included. Explicit formulae for the conditional moments and the polynomial eigenfunctions are derived. Explicit optimal martingale estimating functions are found. The discussion also covers GMM, quasi-likelihood, non-linear weighted least squares estimation and likelihood inference. The analytical tractability is inherited by transformed Pearson diffusions, integrated Pearson diffusions, sums of Pearson diffusions and Pearson stochastic volatility models. For the non-Markov models, explicit optimal prediction-based estimating functions are found. The estimators are shown to be consistent and asymptotically normal.
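The Ornstein–Uhlenbeck and CIR processes named above are the simplest members of the Pearson class (linear drift, squared diffusion coefficient at most quadratic). A minimal Euler–Maruyama simulation sketch; the parameterization and values are illustrative and need not match the paper's.

```python
import numpy as np

def simulate_pearson(x0, mu, theta, sigma2_fn, dt, n, rng=None):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sqrt(sigma2(X)) dW.

    sigma2_fn(x) should be (at most) quadratic in x, which is what
    characterizes the Pearson class of diffusions.
    """
    rng = np.random.default_rng(rng)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = theta * (mu - x[i])
        diff = np.sqrt(max(sigma2_fn(x[i]), 0.0))
        x[i + 1] = x[i] + drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
    return x

# Ornstein-Uhlenbeck: constant squared diffusion coefficient
ou = simulate_pearson(0.0, mu=0.0, theta=1.0, sigma2_fn=lambda x: 0.25,
                      dt=0.01, n=1000, rng=1)
# CIR / square-root process: squared diffusion coefficient linear in x
cir = simulate_pearson(1.0, mu=1.0, theta=2.0, sigma2_fn=lambda x: 0.2 * x,
                       dt=0.01, n=1000, rng=2)
print(ou[:5], cir[:5])
```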

5.
We consider a stochastic differential equation involving standard and fractional Brownian motion with an unknown drift parameter to be estimated. We investigate the standard maximum likelihood estimate of the drift parameter, two non-standard estimates and three estimates for sequential estimation. Strong consistency and some other properties are proved for the models considered. The linear model and the Ornstein–Uhlenbeck model are studied in detail. As an auxiliary result, the asymptotic behaviour of the fractional derivative of the fractional Brownian motion is established.

6.
In this paper, a semi‐parametric single‐index model is investigated. The link function is allowed to be unbounded and to have unbounded support, which answers a pending issue in the literature. Meanwhile, the link function is treated as a point in an infinite-dimensional function space, which enables us to derive estimates for the index parameter and the link function simultaneously. This approach differs from the profile method commonly used in the literature. The estimator is derived from an optimisation subject to the identification condition for the index parameter, which addresses an important problem in the literature on single‐index models. In addition, making use of a property of Hermite orthogonal polynomials, an explicit estimator for the index parameter is obtained. Asymptotic properties of the two estimators of the index parameter are established. Their efficiency is discussed in some special cases as well. The finite sample properties of the two estimators are demonstrated through an extensive Monte Carlo study and an empirical example.

7.
During drug development, the calculation of the inhibitory concentration that results in a response of 50% (IC50) is performed thousands of times every day. The nonlinear model most often used to perform this calculation is a four‐parameter logistic, suitably parameterized to estimate the IC50 directly. When performing these calculations in a high‐throughput mode, each and every curve cannot be studied in detail, and outliers in the responses are a common problem. A robust estimation procedure for performing this calculation is therefore desirable. In this paper, a rank‐based estimate of the four‐parameter logistic model that is analogous to least squares is proposed. The rank‐based estimate is based on the Wilcoxon norm. The robust procedure is illustrated with several examples from the pharmaceutical industry. When no outliers are present in the data, the robust estimate of IC50 is comparable with the least squares estimate, and when outliers are present, the robust estimate is more accurate. A robust goodness‐of‐fit test is also proposed. To investigate the impact of outliers on the traditional and robust estimates, a small simulation study was conducted.
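The four‐parameter logistic curve mentioned above can be parameterized so that the IC50 is estimated directly. The sketch below fits it by ordinary least squares with scipy's curve_fit on made-up dose–response data; the rank‐based Wilcoxon-norm estimate proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic curve, parameterized directly in the IC50."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Made-up dose-response data (concentration, response)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([98.0, 95.0, 88.0, 70.0, 48.0, 25.0, 10.0, 5.0])

p0 = [100.0, 1.0, 1.0, 1.0]                       # rough starting values
lower = [50.0, 0.0, 1e-3, 0.1]
upper = [150.0, 20.0, 100.0, 5.0]
params, cov = curve_fit(four_pl, conc, resp, p0=p0, bounds=(lower, upper))
top, bottom, ic50, hill = params
print(f"least-squares IC50 estimate: {ic50:.3g}")
```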

8.
Nonlinear mixed‐effects models are widely used for the analysis of longitudinal data, especially from pharmaceutical research. They involve random effects, which are latent and unobservable variables, so the random‐effects distribution is subject to misspecification in practice. In this paper, we first study the consequences of misspecifying the random‐effects distribution in nonlinear mixed‐effects models. Our study focuses on Gauss‐Hermite quadrature, which is now the routine method for calculating the marginal likelihood in mixed models. We then present a formal diagnostic test to check the appropriateness of the assumed random‐effects distribution in nonlinear mixed‐effects models, which is very useful for real data analysis. Our findings show that the estimates of fixed‐effects parameters in nonlinear mixed‐effects models are generally robust to deviations from normality of the random‐effects distribution, but the estimates of variance components are very sensitive to the distributional assumption on the random effects. Furthermore, a misspecified random‐effects distribution will either overestimate or underestimate the predictions of random effects. We illustrate the results using a real data application from an intensive pharmacokinetic study.
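Gauss–Hermite quadrature, as referenced above, approximates an integral against a normal density, which is how a marginal likelihood contribution with one scalar random effect is usually computed. A minimal sketch; the toy Poisson integrand and parameter values are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import poisson

def gh_expectation(f, mu, sigma, deg=20):
    """Approximate E[f(B)] for B ~ N(mu, sigma^2) by Gauss-Hermite quadrature.

    Uses the change of variables b = mu + sqrt(2)*sigma*x, so that
    E[f(B)] ~= (1/sqrt(pi)) * sum_i w_i * f(mu + sqrt(2)*sigma*x_i).
    """
    nodes, weights = np.polynomial.hermite.hermgauss(deg)
    return np.sum(weights * f(mu + np.sqrt(2.0) * sigma * nodes)) / np.sqrt(np.pi)

# Toy marginal likelihood contribution: y | b ~ Poisson(exp(beta0 + b)),
# b ~ N(0, sigma_b^2); integrate the conditional likelihood over b.
y, beta0, sigma_b = 3, 0.5, 0.8
marginal = gh_expectation(lambda b: poisson.pmf(y, np.exp(beta0 + b)),
                          mu=0.0, sigma=sigma_b)
print(marginal)
```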

9.
We advocate the use of an Indirect Inference method to estimate the parameter of a COGARCH(1,1) process from equally spaced observations. This requires that the true model can be simulated and that a reasonable estimation method exists for an approximate auxiliary model. We follow previous approaches and use linear projections leading to an auxiliary autoregressive model for the squared COGARCH returns. The asymptotic theory of the Indirect Inference estimator relies on a uniform strong law of large numbers and asymptotic normality of the parameter estimates of the auxiliary model, which require continuity and differentiability of the COGARCH process with respect to its parameter and which we prove via Kolmogorov's continuity criterion. This leads to consistent and asymptotically normal Indirect Inference estimates under moment conditions on the driving Lévy process. A simulation study shows that the method yields a substantial finite-sample bias reduction compared with previous estimators.
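A schematic indirect-inference loop in the spirit of the abstract: fit an autoregressive auxiliary model to the squared returns of the data, then choose structural parameters so that the same auxiliary fit on simulated data matches. Everything here is a simplified stand-in: a discrete-time GARCH(1,1) simulator replaces the COGARCH(1,1) model, the auxiliary AR fit is plain least squares, and all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def simulate_garch(omega, alpha, beta, n, rng):
    """Discrete-time GARCH(1,1) returns; a stand-in for a COGARCH simulator."""
    r = np.empty(n)
    s2 = omega / max(1.0 - alpha - beta, 1e-6)   # start at the stationary variance
    for t in range(n):
        r[t] = np.sqrt(s2) * rng.standard_normal()
        s2 = omega + alpha * r[t] ** 2 + beta * s2
    return r

def ar_params(x, p=2):
    """OLS fit of an AR(p) auxiliary model (intercept + p lags)."""
    X = np.column_stack([np.ones(len(x) - p)] +
                        [x[p - j - 1:len(x) - j - 1] for j in range(p)])
    return np.linalg.lstsq(X, x[p:], rcond=None)[0]

def indirect_inference(returns, n_sim=10000, p=2, seed=0):
    beta_hat = ar_params(returns ** 2, p)        # auxiliary fit on the data

    def distance(theta):
        omega, alpha, beta = theta
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return 1e6                           # penalize infeasible parameters
        # common random numbers across evaluations via a fixed seed
        sim = simulate_garch(omega, alpha, beta, n_sim, np.random.default_rng(seed))
        return np.sum((ar_params(sim ** 2, p) - beta_hat) ** 2)

    res = minimize(distance, x0=np.array([0.05, 0.1, 0.8]),
                   method="Nelder-Mead", options={"maxiter": 200})
    return res.x

data = simulate_garch(0.1, 0.15, 0.7, 5000, np.random.default_rng(42))
print(indirect_inference(data))
```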

10.
The authors address the problem of likelihood‐based inference for correlated diffusions. Such a task presents two issues: the positive-definiteness constraint on the diffusion matrix and the intractability of the likelihood. The first issue is handled by using the Cholesky factorization of the diffusion matrix. To deal with the intractable likelihood, a generalization of the data augmentation framework of Roberts and Stramer [Roberts and Stramer (2001) Biometrika 88(3), 603–621] to d‐dimensional correlated diffusions, including multivariate stochastic volatility models, is given. The methodology is illustrated through simulated and real data sets. The Canadian Journal of Statistics 39: 52–72; 2011 © 2011 Statistical Society of Canada
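The Cholesky device mentioned above turns the positive-definiteness constraint on a diffusion (covariance) matrix into an unconstrained parameterization. A minimal sketch; mapping an unconstrained vector to a lower-triangular factor with an exponentiated diagonal is one common convention, not necessarily the authors' exact one.

```python
import numpy as np

def chol_from_unconstrained(theta, d):
    """Map an unconstrained vector of length d*(d+1)//2 to a lower-triangular
    Cholesky factor L with positive diagonal, so Sigma = L @ L.T is symmetric
    positive definite by construction."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta
    L[np.diag_indices(d)] = np.exp(np.diag(L))   # enforce a positive diagonal
    return L

theta = np.array([0.1, 0.3, -0.2])               # unconstrained parameters, d = 2
L = chol_from_unconstrained(theta, d=2)
Sigma = L @ L.T
print(Sigma)
print(np.all(np.linalg.eigvalsh(Sigma) > 0))     # True: positive definite
```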

11.
Investigators often gather longitudinal data to assess changes in responses over time within subjects and to relate these changes to within‐subject changes in predictors. Missing data are common in such studies, and predictors can be correlated with subject‐specific effects. Maximum likelihood methods for generalized linear mixed models provide consistent estimates when the data are 'missing at random' (MAR) but can produce inconsistent estimates in settings where the random effects are correlated with one of the predictors. On the other hand, conditional maximum likelihood methods (and closely related maximum likelihood methods that partition covariates into between‐ and within‐cluster components) provide consistent estimation when random effects are correlated with predictors but can produce inconsistent covariate effect estimates when data are MAR. Using theory, simulation studies, and fits to example data, this paper shows that decomposition methods using complete covariate information produce consistent estimates. In some practical cases these methods, which ostensibly require complete covariate information, actually only involve the observed covariates. These results offer an easy‐to‐use approach to simultaneously protect against bias from both cluster‐level confounding and MAR missingness in assessments of change.
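The between/within partition described above splits a time-varying covariate into its cluster (subject) mean and the within-cluster deviation, so the two components can enter the mixed model as separate predictors. A minimal pandas sketch with made-up column names and values.

```python
import pandas as pd

# Made-up longitudinal data: repeated measurements of x within each subject id
df = pd.DataFrame({
    "id": [1, 1, 1, 2, 2, 2],
    "x":  [2.0, 3.0, 4.0, 5.0, 7.0, 9.0],
    "y":  [1.1, 1.9, 2.7, 3.2, 4.8, 6.1],
})

# Between-subject component: subject-specific mean of x
df["x_between"] = df.groupby("id")["x"].transform("mean")
# Within-subject component: deviation from the subject mean
df["x_within"] = df["x"] - df["x_between"]

print(df)
# x_between and x_within can now enter a mixed model as separate predictors,
# shielding the within-subject effect from cluster-level confounding.
```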

12.
The authors explore likelihood‐based methods for making inferences about the components of variance in a general normal mixed linear model. In particular, they use local asymptotic approximations to construct confidence intervals for the components of variance when the components are close to the boundary of the parameter space. In the process, they explore the question of how to profile the restricted likelihood (REML). Also, they show that general REML estimates are less likely to fall on the boundary of the parameter space than maximum‐likelihood estimates and that the likelihood‐ratio test based on the local asymptotic approximation has higher power than the likelihood‐ratio test based on the usual chi‐squared approximation. They examine the finite‐sample properties of the proposed intervals by means of a simulation study.
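The ML-versus-REML contrast for variance components discussed above can be seen directly in standard mixed-model software. A minimal sketch using statsmodels' MixedLM on simulated data; the data-generating model and values are made up for illustration, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_groups, n_per = 15, 5
g = np.repeat(np.arange(n_groups), n_per)
u = rng.normal(0.0, 0.5, n_groups)                       # random intercepts
x = rng.normal(size=n_groups * n_per)
y = 1.0 + 0.8 * x + u[g] + rng.normal(0.0, 1.0, n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "g": g})

model = smf.mixedlm("y ~ x", df, groups=df["g"])
fit_ml = model.fit(reml=False)    # full maximum likelihood
fit_reml = model.fit(reml=True)   # restricted maximum likelihood

# REML variance-component estimates are typically pulled less toward zero
# than ML estimates, especially with few groups.
print(fit_ml.cov_re)
print(fit_reml.cov_re)
```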

13.
The problem of nonparametric drift estimation for ergodic diffusions is studied from a Bayesian perspective. In particular, Gaussian process priors are exhibited that yield optimal contraction rates if the drift function belongs to a smoothness class.
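A Gaussian process prior on a drift function, as used above, can be visualized by drawing sample paths from the prior on a grid. A minimal sketch with a squared-exponential covariance kernel; the kernel choice and hyperparameters are illustrative and not the ones analysed in the paper.

```python
import numpy as np

def gp_prior_draws(grid, length_scale=0.5, amplitude=1.0, n_draws=3,
                   jitter=1e-8, rng=None):
    """Draw candidate drift functions b(x) on `grid` from a mean-zero Gaussian
    process prior with squared-exponential covariance."""
    rng = np.random.default_rng(rng)
    d2 = (grid[:, None] - grid[None, :]) ** 2
    K = amplitude**2 * np.exp(-0.5 * d2 / length_scale**2)
    K += jitter * np.eye(len(grid))                 # numerical stabilization
    return rng.multivariate_normal(np.zeros(len(grid)), K, size=n_draws)

grid = np.linspace(-2.0, 2.0, 100)
draws = gp_prior_draws(grid, rng=1)                 # 3 candidate drift functions
print(draws.shape)                                  # (3, 100)
```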

14.
A multi‐sample test for equality of mean directions is developed for populations having Langevin‐von Mises‐Fisher distributions with a common unknown concentration. The proposed test statistic is a monotone transformation of the likelihood ratio. The high‐concentration asymptotic null distribution of the test statistic is derived. In contrast to previously suggested high‐concentration tests, the high‐concentration asymptotic approximation to the null distribution of the proposed test statistic is also valid for large sample sizes with any fixed nonzero concentration parameter. Simulations of size and power show that the proposed test outperforms competing tests. An example with three‐dimensional data from an anthropological study illustrates the practical application of the testing procedure.
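Likelihood-ratio statistics for a common mean direction are built from per-sample and pooled resultant lengths of the observed unit vectors. The sketch below computes only those ingredients for made-up three-dimensional samples; the monotone transformation and null approximation proposed by the authors are not reproduced.

```python
import numpy as np

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def resultant_length(x):
    """Length of the resultant of unit vectors stored as rows of x."""
    return np.linalg.norm(x.sum(axis=0))

rng = np.random.default_rng(0)
# Made-up samples of unit vectors in R^3 (rows are observations)
samples = [normalize(rng.normal(loc=[1.0, 0.2 * k, 0.0], scale=0.2, size=(30, 3)))
           for k in range(3)]

R_i = np.array([resultant_length(s) for s in samples])   # per-sample resultant lengths
R = resultant_length(np.vstack(samples))                  # pooled resultant length
N = sum(len(s) for s in samples)
print(R_i, R, N)
# Under a common mean direction, sum(R_i) and R are close; a large gap
# points toward differing mean directions.
```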

15.
Likelihood‐based inference with missing data is challenging because the observed log likelihood is often an (intractable) integration over the missing data distribution, which also depends on the unknown parameter. Approximating the integral by Monte Carlo sampling does not necessarily lead to a valid likelihood over the entire parameter space because the Monte Carlo samples are generated from a distribution with a fixed parameter value. We consider approximating the observed log likelihood based on importance sampling. In the proposed method, the dependency of the integral on the parameter is properly reflected through fractional weights. We discuss constructing a confidence interval using the profile likelihood ratio test. A Newton–Raphson algorithm is employed to find the interval end points. Two limited simulation studies show the advantage of the Wilks inference over the Wald inference in terms of power, parameter space conformity and computational efficiency. A real data example on salamander mating shows that our method also works well with high‐dimensional missing data.
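A minimal importance-sampling approximation of an observed-data log likelihood, in the spirit of the abstract but for a toy model where the answer is known: y | b ~ N(theta + b, 1) with latent b ~ N(0, 1), so the true marginal is N(theta, 2). The fractional-weighting scheme of the paper is not reproduced; this is the generic importance-sampling estimate with a proposal drawn once and reused for every theta.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y_obs = 1.3                                  # a single observed value

# Proposal for the latent/missing variable, generated once and reused
M = 5000
b_draws = rng.normal(0.0, 1.5, M)            # proposal: N(0, 1.5^2)
log_proposal = norm.logpdf(b_draws, 0.0, 1.5)

def observed_loglik(theta):
    """log of integral f(y_obs | b, theta) f(b) db, estimated by importance sampling."""
    log_joint = (norm.logpdf(y_obs, theta + b_draws, 1.0)
                 + norm.logpdf(b_draws, 0.0, 1.0))
    log_w = log_joint - log_proposal
    # numerically stable log-mean-exp of the importance weights
    return np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()

# Check against the exact marginal N(theta, 2)
for theta in (0.0, 1.0, 2.0):
    print(theta, observed_loglik(theta), norm.logpdf(y_obs, theta, np.sqrt(2.0)))
```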

16.
Abstract. This paper proposes, implements and investigates a new non‐parametric two‐sample test for detecting stochastic dominance. We pose the question of detecting stochastic dominance in a non‐standard way. This is motivated by existing evidence showing that standard formulations and the pertaining procedures may lead to serious errors in inference. The procedure that we introduce combines testing and model selection. More precisely, we reparametrize the testing problem in terms of Fourier coefficients of well‐known comparison densities. The estimated Fourier coefficients are then used to form a kind of signed smooth rank statistic. In such a setting, the number of Fourier coefficients incorporated into the statistic is a smoothing parameter, which we determine via a flexible selection rule. We establish the asymptotic properties of the new test under the null and alternative hypotheses. The finite-sample performance of the new solution is demonstrated through Monte Carlo studies and an application to a set of survival times.

17.
Abstract. The conditional score approach is proposed for the analysis of errors‐in‐variables current status data under the proportional odds model. Distinct from conditional scores in other applications, the proposed conditional score involves a high‐dimensional nuisance parameter, causing challenges in both asymptotic theory and computation. We propose a composite algorithm combining the Newton–Raphson and self‐consistency algorithms for computation, and we develop an efficient conditional score, analogous to the efficient score from a typical semiparametric likelihood, for building an asymptotic linear expansion and hence the asymptotic distribution of the conditional‐score estimator of the regression parameter. Our proposal is shown to perform well in simulation studies and is applied to a zebrafish basal cell carcinoma data set involving measurement errors in gene expression levels.

18.
Abstract. We propose an extension of graphical log‐linear models that allows for symmetry constraints on some interaction parameters representing homologous factors. The conditional independence structure of such quasi‐symmetric (QS) graphical models is described by an undirected graph with coloured edges, in which a particular colour corresponds to a set of equality constraints on a set of parameters. Unlike standard QS models, the proposed models apply to contingency tables in which only some variables or sets of variables have the same categories. We study the graphical properties of such models, including conditions for decomposition of model parameters and of maximum likelihood estimates.

19.
Network meta‐analysis can be implemented by using arm‐based or contrast‐based models. Here we focus on arm‐based models and fit them using generalized linear mixed model procedures. Full maximum likelihood (ML) estimation leads to biased trial‐by‐treatment interaction variance estimates for heterogeneity. Our objective is therefore to investigate alternative approaches to variance estimation that reduce bias compared with full ML. Specifically, we use penalized quasi‐likelihood/pseudo‐likelihood and hierarchical (h) likelihood approaches. In addition, we consider a novel model modification that yields estimators akin to the residual maximum likelihood estimator for linear mixed models. The proposed methods are compared by simulation, and two real datasets are used for illustration. Simulations show that penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood reduce bias and yield satisfactory coverage rates. Sum‐to‐zero restrictions and baseline contrasts for random trial‐by‐treatment interaction effects, as well as a residual ML‐like adjustment, also reduce bias compared with an unconstrained model when ML is used, but coverage rates are not quite as good. Penalized quasi‐likelihood/pseudo‐likelihood and h‐likelihood are therefore recommended.

20.
In survey sampling, policymaking regarding the allocation of resources to subgroups (called small areas), or the determination of subgroups with specific properties in a population, should be based on reliable estimates. Information, however, is often collected at a different scale than that of these subgroups; hence, estimation can only be obtained from finer-scale data. Parametric mixed models are commonly used in small‐area estimation. The relationship between predictors and response, however, may not be linear in some real situations. Recently, small‐area estimation using a generalised linear mixed model (GLMM) with a penalised spline (P‐spline) regression model for the fixed part of the model has been proposed to analyse cross‐sectional responses, both normal and non‐normal. However, there are many situations in which the responses in small areas are serially dependent over time. Such a situation is exemplified by a data set on the annual number of visits to physicians by patients seeking treatment for asthma in different areas of Manitoba, Canada. In cases where covariates that can possibly predict physician visits by asthma patients (e.g. age and genetic and environmental factors) may not have a linear relationship with the response, new models for analysing such data sets are required. In the current work, using both time‐series and cross‐sectional data methods, we propose P‐spline regression models for small‐area estimation under GLMMs. Our proposed model covers both normal and non‐normal responses. In particular, the empirical best predictors of small‐area parameters and their corresponding prediction intervals are studied, with the maximum likelihood estimation approach used to estimate the model parameters. The performance of the proposed approach is evaluated using simulations and by analysing two real data sets (precipitation and asthma).
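A minimal penalized-spline smoother for the fixed part of a model, in the spirit of the P-spline regression described above but for a plain Gaussian response and with a truncated-line basis rather than B-splines; the data, knots, and smoothing parameter are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.sin(2.0 * np.pi * x) + rng.normal(0.0, 0.3, x.size)   # made-up nonlinear signal

# Truncated-line penalized spline basis: [1, x, (x - k_1)_+, ..., (x - k_K)_+]
knots = np.linspace(0.05, 0.95, 20)
B = np.column_stack([np.ones_like(x), x] +
                    [np.clip(x - k, 0.0, None) for k in knots])

# Ridge-type penalty on the truncated-line coefficients only
lam = 1.0
P = np.zeros((B.shape[1], B.shape[1]))
P[2:, 2:] = np.eye(len(knots))

beta = np.linalg.solve(B.T @ B + lam * P, B.T @ y)   # penalized least squares
fitted = B @ beta
print(np.round(beta[:4], 3))
```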
