Similar Literature

20 similar articles retrieved.
1.
Supremum score test statistics are often used to evaluate hypotheses with unidentifiable nuisance parameters under the null hypothesis. Although these statistics provide an attractive framework for addressing non-identifiability under the null hypothesis, little attention has been paid to their distributional properties in small to moderate sample size settings. In situations where there are identifiable nuisance parameters under the null hypothesis, these statistics may behave erratically in realistic samples as a result of a non-negligible bias induced by substituting these nuisance parameters with their estimates under the null hypothesis. In this paper, we propose an adjustment to the supremum score statistics by subtracting the expected bias from the score processes, and show that this adjustment does not alter the limiting null distribution of the supremum score statistics. Using a simple example from the class of zero-inflated regression models for count data, we show empirically and theoretically that the adjusted tests are superior in terms of size and power. The practical utility of this methodology is illustrated using count data in HIV research.
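As a concrete instance of the zero-inflated class mentioned above, a zero-inflated Poisson model mixes a point mass at zero with an ordinary Poisson count. A minimal sketch (the function name and parameterisation are illustrative, not taken from the paper):

```python
import math

def zip_pmf(k, lam, pi):
    """P(Y = k) for a zero-inflated Poisson: with probability pi the count is
    a structural zero, otherwise it is drawn from Poisson(lam)."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson
    return (1 - pi) * poisson

# Under the boundary null hypothesis pi = 0 the model reduces to a plain
# Poisson, which is what makes the testing problem non-standard.
```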

2.
Recently, Beh and Farver investigated and evaluated three non-iterative procedures for estimating the linear-by-linear parameter of an ordinal log-linear model. The study demonstrated that these non-iterative techniques provide estimates that are, for most types of contingency tables, statistically indistinguishable from estimates obtained via Newton's unidimensional algorithm. Here we show how two of these techniques are related using the Box–Cox transformation. We also show that, by using this transformation, accurate non-iterative estimates are achievable even when a contingency table contains sampling zeros.
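For reference, the Box–Cox family used to relate the two techniques transforms a positive y as (y^λ − 1)/λ, with the λ → 0 limit giving log y. A minimal sketch:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform of y > 0; continuous in lam, with the lam -> 0
    limit equal to log(y)."""
    if abs(lam) < 1e-12:
        return math.log(y)
    return (y**lam - 1.0) / lam
```

The λ → 0 case is handled explicitly because the generic formula is numerically unstable near zero.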

3.
A fairly complete introduction to the large sample theory of parametric multinomial models, suitable for a second-year graduate course in categorical data analysis, can be based on Birch's theorem (1964) and the delta method (Bishop, Fienberg, and Holland 1975). I present an elementary derivation of a version of Birch's theorem using the implicit function theorem from advanced calculus, which allows the presentation to be relatively self-contained. The use of the delta method in deriving asymptotic distributions is illustrated by Rao's (1973) result on the distribution of standardized residuals, which complements the presentation in Bishop, Fienberg, and Holland. The asymptotic theory is illustrated by two examples.
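The standardized-residual illustration can be sketched numerically. Shown here are the Pearson-type residuals (n_i − n p_i)/√(n p_i), whose squares sum to the Pearson chi-square statistic; this is a simplified form, and Rao's standardized residuals involve a further variance adjustment:

```python
import math

def standardized_residuals(counts, probs):
    """Pearson-type residuals (n_i - n*p_i) / sqrt(n*p_i) for a multinomial
    sample with cell counts `counts` and null cell probabilities `probs`."""
    n = sum(counts)
    return [(c - n * p) / math.sqrt(n * p) for c, p in zip(counts, probs)]

# The sum of squared residuals is the Pearson chi-square statistic.
```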

4.
In this paper, a simulation study is conducted to systematically investigate the impact of different types of missing data on six statistical analyses: four likelihood-based linear mixed effects models and analysis of covariance (ANCOVA) using two different data sets, in non-inferiority trial settings for the analysis of longitudinal continuous data. ANCOVA is valid when the missing data are missing completely at random. Likelihood-based linear mixed effects model approaches are valid when the missing data are missing at random. The pattern-mixture model (PMM) was developed to incorporate a non-random missingness mechanism. Our simulations suggest that two linear mixed effects models, one using an unstructured covariance matrix for within-subject correlation with no random effects and one using a first-order autoregressive covariance matrix for within-subject correlation with random coefficient effects, provide good control of the type 1 error (T1E) rate when the missing data are missing completely at random or missing at random. ANCOVA using a last-observation-carried-forward imputed data set is the worst method in terms of bias and T1E rate. The PMM does not show much improvement in controlling the T1E rate compared with the other linear mixed effects models when the missing data are not missing at random, and is markedly inferior when the missing data are missing at random. Copyright © 2009 John Wiley & Sons, Ltd.
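The last-observation-carried-forward imputation criticised above can be sketched as follows (using None for a missing visit; names are illustrative):

```python
def locf(series):
    """Last observation carried forward: replace each missing value (None)
    with the most recent observed value; leading missing values remain
    missing because there is nothing to carry forward."""
    out, last = [], None
    for v in series:
        if v is not None:
            last = v
        out.append(last)
    return out
```

The sketch makes the source of bias visible: every imputed value freezes the subject's trajectory at its last observed level.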

5.
The authors provide a rigorous large sample theory for linear models whose response variable has been subjected to the Box-Cox transformation. They provide a continuous asymptotic approximation to the distribution of estimators of natural parameters of the model. They show, in particular, that the maximum likelihood estimator of the ratio of slope to residual standard deviation is consistent and relatively stable. The authors further show the importance for inference of normality of the errors and give tests for normality based on the estimated residuals. For non-normal errors, they give adjustments to the log-likelihood and to asymptotic standard errors.

6.
ABSTRACT

We propose a generalization of the one-dimensional Jeffreys' rule in order to obtain non-informative prior distributions for non-regular models, taking into account the comments made by Jeffreys in his article of 1946. These non-informative priors are parameterization-invariant, and the resulting Bayesian intervals have good frequentist behaviour. In some important cases, we can generate non-informative distributions for multi-parameter models with non-regular parameters. In non-regular models, the Bayesian method offers a satisfactory solution to the inference problem and also avoids the difficulties that the maximum likelihood estimator faces with these models. Finally, we obtain non-informative distributions for job-search and deterministic frontier production homogeneous models.

7.
We present some lower bounds for the probability of zero for the class of count distributions having a log-convex probability generating function, which includes compound and mixed-Poisson distributions. These lower bounds allow the construction of new non-parametric estimators of the number of unobserved zeros, which are useful for capture-recapture models, or in areas like epidemiology and literary style analysis. Some of these bounds also lead to the well-known Chao's and Turing's estimators. Several examples of application are analysed and discussed.
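Chao's estimator mentioned above bounds the number of unobserved zero-count classes from below using the singleton and doubleton frequencies, n̂₀ ≥ f₁²/(2 f₂). A minimal sketch (the f₂ = 0 fallback is a commonly used bias-corrected variant, not a detail from this paper):

```python
def chao_lower_bound(f1, f2):
    """Chao's lower bound for the number of unobserved zero-count classes,
    computed from the number of singletons f1 and doubletons f2."""
    if f2 == 0:
        # Common bias-corrected variant used when no doubletons are observed.
        return f1 * (f1 - 1) / 2.0
    return f1 * f1 / (2.0 * f2)
```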

8.
ABSTRACT: We introduce a class of Toeplitz-band matrices for simple goodness-of-fit tests for parametric regression models. For a given length r of the band matrix, the asymptotically optimal solution is derived. Asymptotic normality of the corresponding test statistic is established under fixed and random design assumptions, as well as for linear and non-linear models, respectively. This allows testing of any parametric assumption as well as the computation of confidence intervals for a quadratic measure of discrepancy between the parametric model and the true signal g. Furthermore, the connection between testing the parametric goodness of fit and estimating the error variance is highlighted. As a by-product we obtain a much simpler proof of a result of [34] concerning the optimality of an estimator for the variance. Our results unify and generalize recent results of [9] and [15, 16] in several directions. Extensions to multivariate predictors and unbounded signals are discussed. A simulation study shows that a simple jackknife correction of the proposed test statistics leads to reasonable finite sample approximations.

9.
This paper presents a non-parametric method for estimating the conditional density associated with the jump rate of a piecewise-deterministic Markov process. In our framework, the estimation requires only one observation of the process over a long time interval. Our method relies on a generalization of Aalen's multiplicative intensity model. We prove the uniform consistency of our estimator under some reasonable assumptions on the primitive characteristics of the process. A simulation study illustrates the behaviour of our estimator.

10.
Abstract. General autoregressive moving average (ARMA) models extend the traditional ARMA models by removing the assumptions of causality and invertibility. In contrast to the Gaussian setting, these assumptions are not required for the identifiability of the model parameters in a non-Gaussian setting. We study M-estimation for general ARMA processes with infinite variance, where the distribution of the innovations is in the domain of attraction of a non-Gaussian stable law. Following the approach taken by Davis et al. (1992) and Davis (1996), we derive a functional limit theorem for random processes based on the objective function, and establish asymptotic properties of the M-estimator. We also consider bootstrapping the M-estimator and extend the results of Davis & Wu (1997) to the present setting so that statistical inferences are readily implemented. Simulation studies are conducted to evaluate the finite sample performance of the M-estimation and bootstrap procedures. An empirical example of financial time series is also provided.

11.
We develop simple necessary and sufficient conditions for a hierarchical log-linear model to be strictly collapsible in the sense defined by Whittemore (1978). We then show that collapsibility as defined by Asmussen & Edwards (1983) can be viewed as equivalent to collapsibility as defined by Whittemore (1978), and illustrate why Bishop, Fienberg & Holland's (1975, p. 47) conditions for collapsibility are sufficient but not necessary. Finally, we discuss how collapsibility facilitates the interpretation of certain hierarchical log-linear models and the formulation of hypotheses concerning marginal distributions associated with multidimensional contingency tables.

12.
We introduce two types of graphical log-linear models: label- and level-invariant models for triangle-free graphs. These models generalise symmetry concepts in graphical log-linear models and provide a tool with which to model symmetry in the discrete case. A label-invariant model is category-invariant and is preserved after permuting some of the vertices according to transformations that maintain the graph, whereas a level-invariant model equates expected frequencies according to a given set of permutations. These new models can both be seen as instances of a new type of graphical log-linear model termed the restricted graphical log-linear model, or RGLL, in which equality restrictions on subsets of main effects and first-order interactions are imposed. Their likelihood equations and graphical representation can be obtained from those derived for the RGLL models.

13.
Relative risks are often considered preferable to odds ratios for quantifying the association between a predictor and a binary outcome. Relative risk regression is an alternative to logistic regression where the parameters are relative risks rather than odds ratios. It uses a log link binomial generalised linear model, or log-binomial model, which requires parameter constraints to prevent probabilities from exceeding 1. This leads to numerical problems with standard approaches for finding the maximum likelihood estimate (MLE), such as Fisher scoring, and has motivated various non-MLE approaches. In this paper we discuss the roles of the MLE and its main competitors for relative risk regression. It is argued that reliable alternatives to Fisher scoring mean that numerical issues are no longer a motivation for non-MLE methods. Nonetheless, non-MLE methods may be worthwhile for other reasons and we evaluate this possibility for alternatives within a class of quasi-likelihood methods. The MLE obtained using a reliable computational method is recommended, but this approach requires bootstrapping when estimates are on the parameter space boundary. If convenience is paramount, then quasi-likelihood estimation can be a good alternative, although parameter constraints may be violated. Sensitivity to model misspecification and outliers is also discussed along with recommendations and priorities for future research.
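For intuition on the distinction the abstract draws, both measures can be computed directly from a 2×2 table: under a log link, the exponentiated coefficient is a relative risk, whereas the logistic link yields odds ratios. A minimal sketch (the table layout is an illustrative convention, not from the paper):

```python
def relative_risk(a, b, c, d):
    """Relative risk from a 2x2 table: exposed row (a events, b non-events),
    unexposed row (c events, d non-events)."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Odds ratio from the same 2x2 table; approximates the relative risk
    only when the outcome is rare."""
    return (a * d) / (b * c)
```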

14.
Many model-free dimension reduction methods have been developed for high-dimensional regression data, but little attention has been paid to problems with non-linear confounding. In this paper, we propose an inverse-regression method based on transformation of the dependent variable for detecting the presence of non-linear confounding. The benefit of using geometrical information from our method is highlighted. A ratio estimation strategy is incorporated in our approach to enhance the interpretability of variable selection. The approach can be implemented not only with principal Hessian directions (PHD) but also with other recently developed dimension reduction methods. Several simulation examples are reported for illustration, with comparisons made against sliced inverse regression and PHD applied in ignorance of the non-linear confounding. An illustrative application to a real data set is also presented.

15.
This paper investigates a new random contraction scheme which complements the length-biasing and convolution contraction schemes considered in the literature. A random power contraction is used with order statistics, leading to new and elegant characterizations of the power distribution. In view of Rossberg's counter-example of a non-exponential law with exponentially distributed spacings of order statistics, possibly the most appealing consequence of the result is a characterization of the exponential distribution via an independent exponential shift of order statistics.

16.
Huber's estimator has had a long-lasting impact, particularly on robust statistics. It is well known that, under certain conditions, Huber's estimator is asymptotically minimax. A moderate generalization of the derivation of Huber's estimator shows that it is not the only choice. We develop an alternative asymptotic minimax estimator, which we call regression with stochastically bounded noise (RSBN). Simulations demonstrate that RSBN performs slightly better, although it is unclear how to justify such an improvement theoretically. We propose two numerical solutions: an iterative solution based on the proximal point method, which is extremely easy to implement, and a solution obtained by applying state-of-the-art nonlinear optimization software packages such as SNOPT. The main contribution, the generalization of the variational approach, is of independent interest and should be useful in deriving asymptotic minimax estimators for other problems.
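For context, Huber's estimator minimises a loss that is quadratic for small residuals and linear for large ones, which is what tempers the influence of outliers relative to least squares. A minimal sketch of the rho function (the default tuning constant 1.345 is the conventional choice for roughly 95% Gaussian efficiency, not a value from this paper):

```python
def huber_loss(r, k=1.345):
    """Huber's rho function: 0.5*r^2 for |r| <= k, and the linear
    continuation k*|r| - 0.5*k^2 beyond the threshold k."""
    a = abs(r)
    if a <= k:
        return 0.5 * r * r
    return k * a - 0.5 * k * k
```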

17.
This paper is about vector autoregressive-moving average models with time-dependent coefficients used to represent non-stationary time series. Contrary to other papers in the univariate case, the coefficients depend on time but not on the series' length n. Under appropriate assumptions, it is shown that a Gaussian quasi-maximum likelihood estimator is almost surely consistent and asymptotically normal. The theoretical results are illustrated by means of two examples of bivariate processes, for which it is shown that the assumptions underlying the theory apply. In the second example, the innovations are marginally heteroscedastic with a correlation ranging from −0.8 to 0.8. In both examples, the asymptotic information matrix is obtained in the Gaussian case. Finally, the finite-sample behaviour is checked via a Monte Carlo simulation study for n from 25 to 400. The results confirm the validity of the asymptotic properties even for short series, as well as the asymptotic information matrix deduced from the theory.

18.
A consistent approach to the problem of testing non-correlation between two univariate infinite-order autoregressive models was proposed by Hong (1996). His test is based on a weighted sum of squares of residual cross-correlations, with weights depending on a kernel function. In this paper, the author follows Hong's approach to test non-correlation of two cointegrated (or partially non-stationary) ARMA time series. The test of Pham, Roy & Cédras (2003) may be seen as a special case of his approach, as it corresponds to the choice of a truncated uniform kernel. The proposed procedure remains valid for testing non-correlation between two stationary invertible multivariate ARMA time series. The author derives the asymptotic distribution of his test statistics under the null hypothesis and proves that his procedures are consistent. He also studies the level and power of his proposed tests in finite samples through simulation. Finally, he presents an illustration based on real data.

19.
Test statistics for checking the independence between the innovations of several time series are developed. The time series models considered allow for general specifications for the conditional mean and variance functions that could depend on common explanatory variables. In testing for independence between more than two time series, checking pairwise independence does not lead to consistent procedures. Thus a finite family of empirical processes relying on multivariate lagged residuals are constructed, and we derive their asymptotic distributions. In order to obtain simple asymptotic covariance structures, Möbius transformations of the empirical processes are studied, and simplifications occur. Under the null hypothesis of independence, we show that these transformed processes are asymptotically Gaussian, independent, and with tractable covariance functions not depending on the estimated parameters. Various procedures are discussed, including Cramér–von Mises test statistics and tests based on non-parametric measures. The ranks of the residuals are considered in the new methods, giving test statistics which are asymptotically margin-free. Generalized cross-correlations are introduced, extending the concept of cross-correlation to an arbitrary number of time series; portmanteau procedures based on them are discussed. In order to detect the dependence visually, graphical devices are proposed. Simulations are conducted to explore the finite sample properties of the methodology, which is found to be powerful against various types of alternatives when the independence is tested between two and three time series. An application is considered, using the daily log-returns of Apple, Intel and Hewlett-Packard traded on the Nasdaq financial market. The Canadian Journal of Statistics 40: 447–479; 2012 © 2012 Statistical Society of Canada

20.
With the emergence of novel therapies exhibiting mechanisms of action distinct from those of traditional treatments, departure from the proportional hazards (PH) assumption in clinical trials with a time-to-event end point is increasingly common. In these situations, the hazard ratio may not be a valid measure of treatment effect, and the log-rank test may no longer be the most powerful statistical test. The restricted mean survival time (RMST) is an alternative robust and clinically interpretable summary measure that does not rely on the PH assumption. We conduct extensive simulations to evaluate the performance and operating characteristics of RMST-based inference against hazard ratio-based inference, under various scenarios and design parameter setups. The log-rank test is generally powerful when there is evident separation favoring one treatment arm at most time points across the Kaplan-Meier survival curves, but the performance of the RMST test is similar. Under non-PH scenarios where late separation of the survival curves is observed, the RMST-based test performs better than the log-rank test when the truncation time is reasonably close to the tail of the observed curves. Furthermore, when a flat survival tail (or low event rate) in the experimental arm is expected, selecting the minimum of the maximum observed event times as the truncation time point for the RMST is not recommended. In addition, we recommend including an analysis based on the RMST curve over the truncation time in clinical settings where substantial departure from the PH assumption is suspected.
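The RMST discussed above is the area under the Kaplan–Meier curve up to a truncation time τ. A minimal sketch under the assumption of no tied event/censoring times (function and argument names are illustrative, not from the paper):

```python
def rmst(times, events, tau):
    """Restricted mean survival time: area under the Kaplan-Meier survival
    curve up to truncation time tau. `events` holds 1 for an observed event
    and 0 for a censored observation."""
    data = sorted(zip(times, events))
    at_risk, surv, prev_t, area = len(data), 1.0, 0.0, 0.0
    for t, d in data:
        t_cap = min(t, tau)
        area += surv * (t_cap - prev_t)  # rectangle under the current step
        prev_t = t_cap
        if t > tau:
            return area
        if d:
            surv *= 1.0 - 1.0 / at_risk  # KM drop at an event time
        at_risk -= 1
    area += surv * (tau - prev_t)  # flat tail beyond the last observation
    return area
```

With no censoring, the result equals the sample mean of min(T, τ), which gives a quick sanity check.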
