Similar articles
1.
A generalized self-consistency approach to maximum likelihood estimation (MLE) and model building was developed in Tsodikov [2003. Semiparametric models: a generalized self-consistency approach. J. Roy. Statist. Soc. Ser. B Statist. Methodology 65(3), 759–774] and applied to a survival analysis problem. We extend the framework to obtain second-order results such as the information matrix and properties of the variance. The multinomial model motivates the paper and is used throughout as an example. Computational challenges with the multinomial likelihood motivated Baker [1994. The Multinomial–Poisson transformation. The Statistician 43, 495–504] to develop the Multinomial–Poisson (MP) transformation for a large variety of regression models with a multinomial likelihood kernel. Multinomial regression is transformed into a Poisson regression at the cost of augmenting the model parameters and restricting the problem to discrete covariates. Imposing normalization restrictions by means of Lagrange multipliers [Lang, J., 1996. On the comparison of multinomial and Poisson log-linear models. J. Roy. Statist. Soc. Ser. B Statist. Methodology 58, 253–266] justifies the approach. Using the self-consistency framework, we develop an alternative solution to multinomial model fitting that does not require augmented parameters while allowing for a Poisson likelihood and arbitrary covariate structures. Normalization restrictions are imposed by averaging over artificial "missing data" (a fake mixture). The lack of a probabilistic interpretation at the "complete-data" level makes the use of the generalized self-consistency machinery essential.
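As a minimal numerical illustration of the Multinomial–Poisson equivalence (in the saturated, covariate-free case only; the data and code below are illustrative sketches, not the paper's regression setting), maximizing a Poisson likelihood freely over the cell means and then normalizing reproduces the multinomial MLE of the cell probabilities:

```python
import numpy as np
from scipy.optimize import minimize

# Observed multinomial cell counts (hypothetical data).
n = np.array([12.0, 30.0, 8.0, 50.0])
N = n.sum()

def neg_poisson_loglik(log_mu):
    """Negative Poisson log-likelihood in the cell means mu_j (log scale keeps mu > 0)."""
    mu = np.exp(log_mu)
    return -(n * log_mu - mu).sum()  # additive constants in n! dropped

res = minimize(neg_poisson_loglik, x0=np.zeros(len(n)), method="BFGS")
mu_hat = np.exp(res.x)

# Normalizing the Poisson means recovers the multinomial MLE n_j / N.
print(mu_hat / mu_hat.sum())  # ~ [0.12, 0.30, 0.08, 0.50]
print(n / N)
```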

2.
Short analytical proofs are given for classical inequalities due to Daniels [1950. Rank correlation and population models. J. Roy. Statist. Soc. Ser. B 12, 171–181; 1951. Note on Durbin and Stuart's formula for E(rs). J. Roy. Statist. Soc. Ser. B 13, 310] and Durbin and Stuart [1951. Inversions and rank correlation coefficients. J. Roy. Statist. Soc. Ser. B 13, 303–309] relating Spearman's ρ and Kendall's τ.
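For reference, the inequalities in question, in their standard textbook statements (quoted from the usual formulations, not verbatim from the paper): Daniels' inequality and the Durbin–Stuart bounds.

```latex
% Daniels (1950) and Durbin--Stuart (1951), standard forms:
\[
\left|\,3\tau - 2\rho\,\right| \;\le\; 1,
\qquad
\frac{(1+\tau)^{2}}{2} - 1 \;\le\; \rho \;\le\; 1 - \frac{(1-\tau)^{2}}{2}.
\]
```

Together these inequalities carve out the familiar lens-shaped region of attainable (τ, ρ) pairs.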

3.
In this paper, we use a likelihood approach and the local influence method introduced by Cook [Assessment of local influence (with discussion). J. Roy. Statist. Soc. Ser. B. 1986;48:133–149] to study a vector autoregressive (VAR) model. We present the maximum likelihood estimators and the information matrix. We establish the normal curvature and slope diagnostics for the VAR model under several perturbation schemes and use the Monte Carlo method to obtain benchmark values for the directional influence diagnostics and for identifying possibly influential observations. An empirical study using the VAR model to fit real data on monthly returns of IBM stock and the S&P 500 index illustrates the effectiveness of the proposed diagnostics.
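In Cook's approach (recalled here in its generic likelihood form from Cook (1986); the notation is standard and not specific to the paper's VAR development), the likelihood displacement LD(ω) = 2[l(θ̂) − l(θ̂_ω)] is summarized by its normal curvature in a direction h of the perturbation space:

```latex
\[
C_{h} \;=\; 2\left|\, h^{\top}\,\Delta^{\top}\,\ddot{L}^{-1}\,\Delta\, h \,\right|,
\qquad
\Delta \;=\; \left.\frac{\partial^{2} l(\theta \mid \omega)}
{\partial\theta\,\partial\omega^{\top}}\right|_{\theta=\hat\theta,\;\omega=\omega_{0}},
\]
% \ddot{L} is the Hessian of the unperturbed log-likelihood at \hat\theta;
% directions h with large C_h flag influential perturbation schemes.
```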

4.
After outlining some recently obtained results of Hu and Rosenberger [2003. Optimality, variability, power: evaluating response-adaptive randomization procedures for treatment comparisons. J. Amer. Statist. Assoc. 98, 671–678] and Chen [2006. The power of Efron's biased coin design. J. Statist. Plann. Inference 136, 1824–1835] on the relationship between sequential randomized designs and the power of the usual statistical procedures for testing the equivalence of two competing treatments, this paper provides theoretical proofs of the numerical results of Chen [2006]. Furthermore, we prove that the Adjustable Biased Coin Design [Baldi Antognini, A., Giovagnoli, A., 2004. A new "biased coin design" for the sequential allocation of two treatments. J. Roy. Statist. Soc. Ser. C 53, 651–664] is uniformly more powerful than the other "coin" designs proposed in the literature, for any sample size.
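Efron's biased coin design referenced above allocates each new subject to the currently under-represented arm with a fixed bias p (Efron's original choice is p = 2/3). A minimal simulation sketch (the arm labels and the imbalance summary are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def efron_bcd(n, p=2/3):
    """Simulate Efron's biased coin design: favour the lagging arm with probability p."""
    assign = np.empty(n, dtype=int)  # 0 = treatment A, 1 = treatment B
    diff = 0                         # running (#A - #B)
    for i in range(n):
        if diff == 0:
            prob_a = 0.5             # balanced: fair coin
        elif diff < 0:
            prob_a = p               # A lagging: bias towards A
        else:
            prob_a = 1 - p           # B lagging: bias towards B
        assign[i] = 0 if rng.random() < prob_a else 1
        diff += 1 if assign[i] == 0 else -1
    return assign

# Terminal imbalance |n_B - n_A| over repeated trials of size 100.
runs = np.array([abs(2 * efron_bcd(100).sum() - 100) for _ in range(2000)])
print("mean terminal imbalance:", runs.mean())
```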

5.
Multivariate failure time data frequently occur in medical studies, and the dependence or association among survival variables is often of interest (Biometrics, 51, 1995, 1384; Stat. Med., 18, 1999, 3101; Biometrika, 87, 2000, 879; J. Roy. Statist. Soc. Ser. B, 65, 2003, 257). We study the problem of estimating the association between two related survival variables when they follow a copula model and only bivariate interval-censored failure time data are available. For this problem, a two-stage estimation procedure is proposed and the asymptotic properties of the proposed estimator are established. Simulation studies are conducted to assess the finite sample properties of the presented estimate, and the results suggest that the method works well in practical situations. An example from an acquired immunodeficiency syndrome clinical trial is discussed.
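To make the two-stage idea concrete, here is a deliberately simplified sketch with a Clayton copula and fully observed (not interval-censored) data, so that the first stage can use empirical margins; the copula family, data-generating mechanism, and estimation details are illustrative assumptions, not the paper's procedure:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import rankdata

rng = np.random.default_rng(7)
theta_true, n = 2.0, 1000  # Clayton parameter; Kendall's tau = theta/(theta+2) = 0.5

# Sample (u, v) from a Clayton copula by conditional inversion.
u = rng.uniform(size=n)
w = rng.uniform(size=n)
v = (u**(-theta_true) * (w**(-theta_true / (1 + theta_true)) - 1) + 1) ** (-1 / theta_true)

# Dependent 'failure times' with exponential margins.
t1, t2 = -np.log(1 - u), -np.log(1 - v) / 2.0

# Stage 1: estimate the margins nonparametrically (rescaled empirical ranks).
uh = rankdata(t1) / (n + 1)
vh = rankdata(t2) / (n + 1)

def neg_clayton_loglik(theta):
    """Negative log pseudo-likelihood of the Clayton copula density, theta > 0."""
    s = uh**(-theta) + vh**(-theta) - 1.0
    logc = (np.log1p(theta)
            - (theta + 1.0) * (np.log(uh) + np.log(vh))
            - (2.0 + 1.0 / theta) * np.log(s))
    return -logc.sum()

# Stage 2: maximize over the association parameter with the margins plugged in.
res = minimize_scalar(neg_clayton_loglik, bounds=(1e-3, 30.0), method="bounded")
print(res.x, res.x / (res.x + 2))  # ~2.0 and implied Kendall tau ~0.5
```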

6.
On the consistency of the maximum spacing method
The main result of this paper is a consistency theorem for the maximum spacing method, a general method of estimating parameters in continuous univariate distributions, introduced by Cheng and Amin (J. Roy. Statist. Soc. Ser. B 45 (1983) 394–403) and independently by Ranneby (Scand. J. Statist. 11 (1984) 93–112). This main result generalizes a theorem of Ranneby (1984). Some examples are also given which show that the estimation method works even in cases where the maximum likelihood method breaks down.

7.
An alternative to the maximum likelihood (ML) method, the maximum spacing (MSP) method, was introduced in Cheng and Amin [1983. Estimating parameters in continuous univariate distributions with a shifted origin. J. Roy. Statist. Soc. Ser. B 45, 394–403], and independently in Ranneby [1984. The maximum spacing method. An estimation method related to the maximum likelihood method. Scand. J. Statist. 11, 93–112]. The method, as described by Ranneby [1984], is derived from an approximation of the Kullback–Leibler divergence. Since the introduction of the MSP method, several closely related methods have been suggested. This article is a survey of such methods based on spacings and the Kullback–Leibler divergence. These estimation methods possess good properties, and they work in situations where the ML method does not. Important issues such as the handling of ties and incomplete data are discussed, and it is argued that by using Moran's [1951. The random division of an interval—Part II. J. Roy. Statist. Soc. Ser. B 13, 147–150] statistic, on which the MSP method is based, we can effectively combine: (a) a test of whether an assigned model of distribution functions is correct or not, (b) an asymptotically efficient estimation of an unknown parameter θ0, and (c) a computation of a confidence region for θ0.
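A minimal sketch of the MSP idea for a normal sample (the model choice, starting values, and optimizer are illustrative assumptions): order the observations and maximize the sum of log-spacings of the fitted CDF, with F(x_(0)) = 0 and F(x_(n+1)) = 1.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.sort(rng.normal(loc=2.0, scale=1.5, size=200))

def neg_msp(params):
    """Negative maximum-spacing objective: -sum_i log D_i with
    D_i = F(x_(i)) - F(x_(i-1)), padded by F = 0 and F = 1 at the ends."""
    mu, log_sigma = params
    f = norm.cdf(x, loc=mu, scale=np.exp(log_sigma))
    spacings = np.diff(np.concatenate(([0.0], f, [1.0])))
    return -np.sum(np.log(np.clip(spacings, 1e-300, None)))

res = minimize(neg_msp, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(mu_hat, sigma_hat)  # close to the ML estimates in this regular model
```

In regular models like this one the MSP and ML estimates nearly coincide; the MSP objective remains bounded in the unbounded-likelihood cases where ML breaks down.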

8.
In event time data analysis, comparisons between distributions are made by the logrank test. When the data appear to exhibit crossing hazards, nonparametric weighted logrank statistics are usually suggested, accommodating different weight functions to increase the power. However, the gain in power from imposing different weights has its limits, since differences before and after the crossing point may balance each other out. In contrast to the weighted logrank tests, we propose a score-type statistic based on the semiparametric heteroscedastic hazards regression model of Hsieh [2001. On heteroscedastic hazards regression models: theory and application. J. Roy. Statist. Soc. Ser. B 63, 63–79], in which the nonproportionality is explicitly modeled. Our score test is based on estimating functions derived from the partial likelihood under the heteroscedastic model considered herein. Simulation results show the benefit of modeling the heteroscedasticity and compare the power of the proposed test with that of two classes of weighted logrank tests (including the Fleming–Harrington tests and Moreau's locally most powerful test), a Renyi-type test, and Breslow's test for acceleration. We also demonstrate the application of this test by analyzing actual clinical trial data.
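For orientation, the weighted logrank statistics mentioned above take the generic two-sample form below, with the standard Fleming–Harrington weights shown (this is background notation, not the paper's):

```latex
\[
Z \;=\; \frac{\displaystyle\sum_{i} w(t_{i})\,\bigl(d_{1i}-e_{1i}\bigr)}
             {\sqrt{\displaystyle\sum_{i} w(t_{i})^{2}\,v_{i}}},
\qquad
w(t_{i}) \;=\; \hat{S}(t_{i}-)^{\rho}\,\bigl(1-\hat{S}(t_{i}-)\bigr)^{\gamma},
\]
% d_{1i}: observed events in group 1 at event time t_i;
% e_{1i}, v_i: hypergeometric mean and variance under the null;
% \hat{S}: pooled Kaplan--Meier estimate. (\rho,\gamma) = (0,0) is the plain logrank.
```

Early-weighted choices (large ρ) and late-weighted choices (large γ) pull power in opposite directions, which is exactly why crossing hazards can cancel within any single weight.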

9.
Jing Yang, Fang Lu & Hu Yang. Statistics, 2013, 47(6): 1193–1211
The outer product of gradients (OPG) estimation procedure based on the least squares (LS) approach was presented by Xia et al. [An adaptive estimation of dimension reduction space. J. Roy. Statist. Soc. Ser. B. 2002;64:363–410] to estimate the single-index parameter in partially linear single-index models (PLSIM). However, its asymptotic properties have not yet been established, and the efficiency of the LS-based method can be significantly affected by outliers and heavy-tailed distributions. In this paper, we first derive the asymptotic properties of the OPG estimator of Xia et al. [2002], and we further develop a novel robust estimation procedure for PLSIM that combines the ideas of OPG and local rank (LR) inference, along with its theoretical properties. We then derive the asymptotic relative efficiency (ARE) of the proposed LR-based procedure with respect to the LS-based method, which has an expression closely related to that of the signed-rank Wilcoxon test in comparison with the t-test. Moreover, we demonstrate that the proposed estimator achieves substantial efficiency gains across a wide spectrum of non-normal error distributions while losing almost no efficiency under normal errors. Even in the worst-case scenarios, the ARE has a lower bound of 0.864 for estimating the single-index parameter and a lower bound of 0.8896 for estimating the nonparametric function, relative to the LS-based estimators. Finally, Monte Carlo simulations and a real data analysis are conducted to illustrate the finite sample performance of the estimators.
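A bare-bones sketch of the OPG step for a pure single-index model (the kernel, bandwidth, and simulated model are illustrative assumptions; the paper's PLSIM setting and its rank-based variant add further structure): estimate the gradient at each point by local linear least squares, average the outer products, and take the leading eigenvector.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 400, 3
theta = np.array([0.8, 0.6, 0.0])  # unit-norm index direction
X = rng.normal(size=(n, p))
y = np.sin(X @ theta) + 0.1 * rng.normal(size=n)

h = 2.0 * n ** (-1.0 / (p + 4))    # rough rule-of-thumb bandwidth
B = np.zeros((p, p))
for i in range(n):
    d = X - X[i]                                      # local deviations
    w = np.exp(-0.5 * np.sum((d / h) ** 2, axis=1))   # Gaussian kernel weights
    Z = np.hstack([np.ones((n, 1)), d])               # local linear design
    WZ = Z * w[:, None]
    beta, *_ = np.linalg.lstsq(WZ.T @ Z, WZ.T @ y, rcond=None)
    g = beta[1:]                                      # local gradient estimate
    B += np.outer(g, g) / n

# The leading eigenvector of the averaged outer products estimates the index.
vals, vecs = np.linalg.eigh(B)
theta_hat = vecs[:, -1]
theta_hat *= np.sign(theta_hat @ theta)               # fix the sign for comparison
print(theta_hat, theta)
```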

10.
The strength of statistical evidence is measured by the likelihood ratio. Two key performance properties of this measure are the probability of observing strong misleading evidence and the probability of observing weak evidence. For the likelihood function associated with a parametric statistical model, these probabilities have a simple large sample structure when the model is correct. Here we examine how that structure changes when the model fails. This leads to criteria for determining whether a given likelihood function is robust (continuing to perform satisfactorily when the model fails), and to a simple technique for adjusting both likelihoods and profile likelihoods to make them robust. We prove that the expected information in the robust adjusted likelihood cannot exceed the expected information in the likelihood function from a true model. We note that the robust adjusted likelihood is asymptotically fully efficient when the working model is correct, and we show that in some important examples this efficiency is retained even when the working model fails. In such cases the Bayes posterior probability distribution based on the adjusted likelihood is robust, remaining correct asymptotically even when the model for the observable random variable does not include the true distribution. Finally we note a link to standard frequentist methodology: in large samples the adjusted likelihood functions provide robust likelihood-based confidence intervals.
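One simple benchmark behind these performance properties (a standard fact in the evidential literature, not a result of this paper) is the universal bound on misleading evidence, which follows from Markov's inequality because the likelihood ratio has expectation at most 1 under the true hypothesis:

```latex
\[
P_{\theta_{0}}\!\left(\frac{L(\theta_{1})}{L(\theta_{0})} \;\ge\; k\right)
\;\le\; \frac{1}{k}
\qquad \text{for every } k > 1,
\]
% since E_{\theta_0}\!\bigl[L(\theta_1)/L(\theta_0)\bigr] \le 1 and
% Markov's inequality bounds the tail of a nonnegative random variable.
```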

11.
Many of the usual criteria for optimal experimental designs do not take into account the differing scales of the variances of the parameters. Dette [1997. Designing experiments with respect to "standardized" optimality criteria. J. Roy. Statist. Soc. Ser. B Stat. Methodol. 59(1), 97–110] provided a standardization based on the efficiencies for estimating each of the parameters. This approach yields designs with similar efficiencies for all of the parameters.
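One common way to write a standardized maximin criterion of this kind (my paraphrase of the usual formulation; the notation is not taken from Dette's paper) is to maximize the worst single-parameter efficiency, where each efficiency compares the achieved variance with the best achievable one:

```latex
\[
\xi^{*} \;=\; \arg\max_{\xi}\; \min_{1 \le i \le k}\; \mathrm{eff}_{i}(\xi),
\qquad
\mathrm{eff}_{i}(\xi) \;=\;
\frac{\min_{\eta}\, e_{i}^{\top} M(\eta)^{-1} e_{i}}
     {e_{i}^{\top} M(\xi)^{-1} e_{i}},
\]
% M(\xi): information matrix of design \xi; e_i: i-th coordinate vector,
% so eff_i is the efficiency of \xi for estimating the i-th parameter alone.
```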

12.
This paper combines two ideas to construct autoregressive processes of arbitrary order. The first idea is the construction of first-order stationary processes described in Pitt et al. [(2002). Constructing first order autoregressive models via latent processes. Scand. J. Statist. 29, 657–663], and the second is the construction of higher-order processes described in Raftery [(1985). A model for high order Markov chains. J. Roy. Statist. Soc. Ser. B 47, 528–539]. The resulting models provide appealing alternatives for modelling non-linear and non-Gaussian time series.
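A toy Gaussian version of the mixture-over-lags construction (the Gaussian transition and the specific weights are my illustrative choices; Raftery's MTD model is stated for Markov chains, and the paper combines it with the Pitt et al. latent-process construction):

```python
import numpy as np

rng = np.random.default_rng(11)

def simulate_mixture_ar(n, weights=(0.6, 0.3, 0.1), phi=0.9, sigma=1.0, burn=200):
    """Higher-order process: x_t | past ~ sum_j w_j * N(phi * x_{t-j}, sigma^2)."""
    w = np.asarray(weights)
    q = len(w)
    x = np.zeros(n + burn + q)
    for t in range(q, len(x)):
        j = rng.choice(q, p=w) + 1            # pick which lag drives this step
        x[t] = phi * x[t - j] + sigma * rng.normal()
    return x[burn + q:]

x = simulate_mixture_ar(1000)
# Autocorrelations decay differently from a plain AR(1) with the same phi.
for lag in (1, 2, 3):
    print(lag, np.corrcoef(x[:-lag], x[lag:])[0, 1])
```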

13.
This article analyses diffusion-type processes from a new point of view. Consider two statistical hypotheses about a diffusion process. Rather than using a classical Neyman–Pearson test to accept or reject one hypothesis, or a Bayesian approach, we propose using the likelihood paradigm to characterize the statistical evidence in support of these hypotheses. The method is based on evidential inference as introduced and described by Royall [Royall, R. Statistical evidence: a likelihood paradigm. London: Chapman and Hall; 1997]. In this paper, we extend Royall's theory to the case where the data are observations from a diffusion-type process rather than iid observations. The empirical distribution of the likelihood ratio is used to formulate the probabilities of strong, misleading, and weak evidence. Since the strength of evidence can be affected by the sampling characteristics, we present a simulation study that demonstrates these effects, and we show how misleading evidence can be controlled and reduced by adjusting these characteristics. As an illustration, we apply the method to Microsoft stock prices.
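For the simplest diffusion, a constant-drift process dX_t = μ dt + σ dW_t observed on [0, T], the likelihood ratio of μ1 against μ0 has a closed Girsanov form, so the probability of misleading evidence can be estimated by simulation; the settings below (parameter values, threshold k = 8) are illustrative assumptions, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(21)
mu0, mu1, sigma, T, nsteps = 0.0, 0.5, 1.0, 1.0, 500
k = 8.0                    # evidence threshold; k = 8 is a common benchmark
dt = T / nsteps

def log_lr(xT_minus_x0):
    """Girsanov log likelihood-ratio of mu1 vs mu0 for constant-drift BM."""
    return ((mu1 - mu0) / sigma**2) * xT_minus_x0 \
        - (mu1**2 - mu0**2) * T / (2 * sigma**2)

# Simulate paths under the 'true' drift mu0 and count misleading evidence for mu1.
reps = 20000
increments = rng.normal(mu0 * dt, sigma * np.sqrt(dt), size=(reps, nsteps))
xT = increments.sum(axis=1)          # X_T - X_0 under mu0
misleading = np.mean(log_lr(xT) >= np.log(k))
print("P(misleading evidence):", misleading, " universal bound 1/k:", 1 / k)
```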

14.
A parametric robust test is proposed for comparing several coefficients of variation. This test is derived by properly correcting the normal likelihood function according to the technique suggested by Royall and Tsou. The proposed test statistic is asymptotically valid for general random variables, as long as their underlying distributions have finite fourth moments.

Simulation studies and real data analyses are provided to demonstrate the effectiveness of the novel robust procedure.

15.
For semiparametric models, interval estimation and hypothesis testing based on the information matrix for the full model is a challenge because of the potentially unlimited dimension. Use of the profile information matrix for a small set of parameters of interest is an appealing alternative. Existing approaches to the estimation of the profile information matrix are either subject to the curse of dimensionality, or are ad hoc and approximate and can be unstable and numerically inefficient. We propose a numerically stable and efficient algorithm that delivers an exact observed profile information matrix for regression coefficients for the class of Nonlinear Transformation Models [Tsodikov, A., 2003. J. Roy. Statist. Soc. Ser. B 65, 759–774]. The algorithm deals with the curse of dimensionality and requires neither large matrix inverses nor explicit expressions for the profile surface.
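The object being computed is the usual profile information. Writing the full-model observed information in block form over the parameter of interest β and the nuisance parameter η (standard notation, not the paper's), the profile information for β is the Schur complement:

```latex
\[
I \;=\;
\begin{pmatrix}
I_{\beta\beta} & I_{\beta\eta} \\
I_{\eta\beta} & I_{\eta\eta}
\end{pmatrix},
\qquad
I_{p}(\beta) \;=\; I_{\beta\beta} - I_{\beta\eta}\, I_{\eta\eta}^{-1}\, I_{\eta\beta},
\]
% In semiparametric models I_{\eta\eta} can be huge or infinite-dimensional,
% which is what makes direct inversion impractical and motivates the algorithm.
```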

16.
In two-phase linear regression models, it is a standard assumption that the random errors of the two phases have constant variances. However, this assumption is not necessarily appropriate. This paper is devoted to tests for variance heterogeneity in these models. We first discuss the simultaneous test for variance heterogeneity of the two phases. When the simultaneous test shows that significant heteroscedasticity occurs in the whole model, we construct two individual tests to investigate whether both phases, or only one of them, exhibit significant heteroscedasticity. Several score statistics and their adjustments based on Cox and Reid [D.R. Cox and N. Reid, Parameter orthogonality and approximate conditional inference. J. Roy. Statist. Soc. Ser. B 49 (1987), pp. 1–39] are obtained and illustrated with the Australian onion data. The simulated powers of the test statistics are investigated through Monte Carlo methods.
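The generic score statistic underlying such tests (stated here in its standard form; the paper's versions add the Cox–Reid orthogonalization for nuisance parameters) evaluates the score for the q heteroscedasticity parameters at the constant-variance fit:

```latex
\[
S \;=\; U(\hat\theta_{0})^{\top}\, I(\hat\theta_{0})^{-1}\, U(\hat\theta_{0})
\;\xrightarrow{\;d\;}\; \chi^{2}_{q},
\]
% U: score vector for the variance parameters under test, evaluated at the
% null (homoscedastic) MLE \hat\theta_0; I: the corresponding information,
% with nuisance parameters partialled out in the efficient-score version.
```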

17.
Tsou (2003a) proposed a parametric procedure for making robust inference for mean regression parameters in the context of generalized linear models. This robust procedure is extended to model variance heterogeneity. The normal working model is adjusted to become asymptotically robust for inference about regression parameters of the variance function for practically all continuous response variables. The connection between the novel robust variance regression model and the estimating equations approach is also provided.

18.
This paper deals with testing for non-linearity in a regression model with one possibly non-linear component that is estimated non-parametrically using smoothing splines. We propose two new variance–covariance based tests for detecting non-linearity using a likelihood ratio hypothesis testing approach. The first test is for the inclusion of a possibly non-linear component, and the second is for linearity of a possibly non-linear component. The tests are based on a stochastic model in state space form given by Wahba (J. Roy. Statist. Soc. Ser. B 40 (1978) 364), Wecker and Ansley (J. Amer. Statist. Assoc. 78 (1983) 81) and de Jong and Mazzi (Modeling and smoothing unequally spaced sequence data, University of York and University of British Columbia, unpublished paper), for which smoothing splines provide an optimal estimate. Pitrun (A smoothing spline approach to non-linear inference for time series, Department of Econometrics and Business Statistics, Monash University, unpublished Ph.D. thesis) derived the variance–covariance structure of this model, which allows the use of a marginal likelihood approach. This leads naturally to marginal-likelihood based likelihood ratio tests for non-linearity. Small sample properties of the new tests are investigated via Monte Carlo studies.
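For context, the smoothing-spline state-space form referred to above is usually written as follows (the standard integrated-Wiener formulation; the notation here is mine, not the cited papers'): over a step δ_i = t_{i+1} − t_i,

```latex
\[
y_{i} = g(t_{i}) + \varepsilon_{i},
\qquad
\begin{pmatrix} g \\ g' \end{pmatrix}_{t_{i+1}}
=
\begin{pmatrix} 1 & \delta_{i} \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} g \\ g' \end{pmatrix}_{t_{i}}
+ \eta_{i},
\qquad
\operatorname{Var}(\eta_{i}) = \sigma_{\zeta}^{2}
\begin{pmatrix} \delta_{i}^{3}/3 & \delta_{i}^{2}/2 \\ \delta_{i}^{2}/2 & \delta_{i} \end{pmatrix},
\]
% The smoothed estimate of g given y_1, ..., y_n from the Kalman filter/smoother
% coincides with the cubic smoothing spline; linearity of g corresponds to
% \sigma_\zeta^2 = 0, which is what the likelihood ratio tests examine.
```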

19.
The added variable plot is commonly used in assessing the adequacy of a normal linear model. This plot is often used to evaluate the effect of adding an explanatory variable to the model and to detect possible high leverage points or observations influential on the added variable. However, the validity of this type of plot is in doubt once the normal distributional assumptions are violated. In this article, we extend the robust likelihood technique introduced by Royall and Tsou [11] to propose a robust added variable plot. The validity of this diagnostic plot requires no knowledge of the true underlying distributions, so long as their second moments exist. The usefulness of the robust graphical approach is demonstrated through a few illustrations and simulations.
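The classical construction that the robust version builds on (the standard added variable plot; the variable names and simulated data are illustrative): plot the residuals of y on X against the residuals of the candidate variable z on X. By the Frisch–Waugh–Lovell theorem, the least squares slope through the origin equals z's coefficient in the full model.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
z = rng.normal(size=n) + 0.5 * X[:, 1]                 # candidate variable, correlated with X
y = X @ np.array([1.0, 2.0]) + 0.7 * z + rng.normal(size=n)

def resid(a, M):
    """Residuals of a after least squares projection onto the columns of M."""
    coef, *_ = np.linalg.lstsq(M, a, rcond=None)
    return a - M @ coef

ry, rz = resid(y, X), resid(z, X)

# Slope of ry on rz (through the origin) = coefficient of z in the full model.
slope = (rz @ ry) / (rz @ rz)
full_coef = np.linalg.lstsq(np.column_stack([X, z]), y, rcond=None)[0][-1]
print(slope, full_coef)  # both ~ 0.7

# The added variable plot itself is simply a scatter of (rz, ry) with this line.
```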

20.
In this paper, we show that if the Euclidean parameter of a semiparametric model can be estimated through an estimating function, the conditions of Dmitrienko and Govindarajulu [2000. Ann. Statist. 28(5), 1472–1501] can be extended straightforwardly to prove that the estimator indexed by any regular sequence (the sequential estimator) has the same asymptotic behavior as the non-sequential estimator. These conditions also allow us to obtain the asymptotic normality of the stopping rule in the special case of sequential confidence sets. These results are applied to the proportional hazards model, for which we show that, after slight modifications, the classical assumptions of Andersen and Gill [1982. Ann. Statist. 10(4), 1100–1120] are sufficient to obtain the asymptotic behavior of the sequential version of the well-known Cox [1972. J. Roy. Statist. Soc. Ser. B 34, 187–220] partial maximum likelihood estimator. To prove this result we establish a strong convergence result for the regression parameter estimator, relying mainly on exponential inequalities for continuous martingales and for some basic empirical processes. A typical example of a fixed-width confidence interval is given and illustrated by a Monte Carlo study.
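The fixed-width confidence interval idea in the final sentence, in its simplest iid form (a textbook sketch for a mean, not the paper's proportional hazards setting): keep sampling until the half-width of the level-(1 − α) interval drops below a prescribed d.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(17)
alpha, d = 0.05, 0.1
z = norm.ppf(1 - alpha / 2)

def fixed_width_ci(draw, n0=10, nmax=100_000):
    """Sample until z * s_n / sqrt(n) <= d; return the stopping time and mean."""
    x = list(draw(n0))
    while len(x) < nmax:
        n = len(x)
        half = z * np.std(x, ddof=1) / np.sqrt(n)
        if half <= d:
            return n, float(np.mean(x))
        x.extend(draw(max(1, n // 10)))  # grow the sample and re-check
    return nmax, float(np.mean(x))

n_stop, mean_hat = fixed_width_ci(lambda m: rng.normal(0.0, 2.0, size=m))
print("stopping time:", n_stop, " interval:", (mean_hat - d, mean_hat + d))
```

The stopping time adapts to the unknown variance, which is why its asymptotic normality (established in the paper for the sequential Cox estimator) is the key technical ingredient.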
