Similar Documents
 20 similar documents found (search time: 887 ms)
1.
We obtain adjustments to the profile likelihood function in Weibull regression models with and without censoring. Specifically, we consider two different modified profile likelihoods: (i) the one proposed by Cox and Reid [Cox, D.R. and Reid, N., 1987, Parameter orthogonality and approximate conditional inference. Journal of the Royal Statistical Society B, 49, 1–39.], and (ii) an approximation to the one proposed by Barndorff-Nielsen [Barndorff-Nielsen, O.E., 1983, On a formula for the distribution of the maximum likelihood estimator. Biometrika, 70, 343–365.], the approximation having been obtained using the results by Fraser and Reid [Fraser, D.A.S. and Reid, N., 1995, Ancillaries and third-order significance. Utilitas Mathematica, 47, 33–53.] and by Fraser et al. [Fraser, D.A.S., Reid, N. and Wu, J., 1999, A simple formula for tail probabilities for frequentist and Bayesian inference. Biometrika, 86, 655–661.]. We focus on point estimation and likelihood ratio tests on the shape parameter in the class of Weibull regression models. We derive some distributional properties of the different maximum likelihood estimators and likelihood ratio tests. The numerical evidence presented in the paper favors the approximation to Barndorff-Nielsen's adjustment.
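For the simplest case touched by this abstract (no censoring, no regressors), the profile likelihood for the Weibull shape can be written down in closed form: the scale MLE given shape k is lam(k) = (mean(x**k))**(1/k), and substituting it back leaves a one-dimensional function of k. A minimal sketch, with our own (hypothetical) function and variable names, not the article's adjusted versions:

```python
import numpy as np

def weibull_profile_loglik(shape, x):
    # Scale MLE given the shape: lam = (mean(x**shape))**(1/shape).
    # Plugging it back in makes sum((x/lam)**shape) = n, so the
    # profile log-likelihood depends on the shape alone.
    x = np.asarray(x, float)
    n = len(x)
    lam = np.mean(x ** shape) ** (1.0 / shape)
    return (n * np.log(shape) - n * shape * np.log(lam)
            + (shape - 1.0) * np.sum(np.log(x)) - n)

# Grid search for the profile MLE of the shape (true value: 2)
rng = np.random.default_rng(1)
x = rng.weibull(2.0, size=2000)
grid = np.arange(0.5, 4.0, 0.01)
shape_hat = grid[np.argmax([weibull_profile_loglik(k, x) for k in grid])]
```

The modified profile likelihoods the article studies add correction terms to this function; the sketch only shows the unadjusted baseline they improve upon.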

2.
ABSTRACT

In this paper, the maximum value test is proposed and considered for two-sample problem solving with lifetime data. This test is a distribution-free test under non-censoring and is a not distribution-free test under censoring. The formula of the limit distribution of the proposed maximal value test is represented in the general case. The distribution of the test statistic has been studied experimentally. Also, we propose the estimate of a p-value calculation of the maximum value test instead of the Monte-Carlo simulation. This test is useful and applicable in case of choosing among the logrank test, the Cox–Mantel test, the Q test and Generalized Wilcoxon tests, for instance, the Gehan's Generalized Wilcoxon test and the Peto and Peto's Generalized Wilcoxon test.  相似文献   
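Of the candidate tests listed, the logrank test is the easiest to compute from first principles: at each event time, compare observed deaths in one group with the expectation under equal hazards, and standardize by the hypergeometric variance. A minimal two-sample sketch with hypothetical data (this is the standard logrank, not the article's maximum value test):

```python
import numpy as np

def logrank_z(t1, e1, t2, e2):
    # Pool the two samples; g marks group membership, e marks events
    # (True) versus right-censored observations (False).
    t = np.concatenate([t1, t2])
    e = np.concatenate([e1, e2]).astype(bool)
    g = np.concatenate([np.zeros(len(t1)), np.ones(len(t2))])
    U = 0.0  # sum of observed-minus-expected deaths in group 1
    V = 0.0  # sum of hypergeometric variances
    for tt in np.unique(t[e]):
        at_risk = t >= tt
        n = at_risk.sum()
        n1 = (at_risk & (g == 0)).sum()
        d = ((t == tt) & e).sum()             # deaths at this time
        d1 = ((t == tt) & e & (g == 0)).sum() # deaths in group 1
        U += d1 - d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1.0 - n1 / n) * (n - d) / (n - 1)
    return U / np.sqrt(V)

# Hypothetical data: hazard in group 2 is three times that of group 1
rng = np.random.default_rng(0)
t1, t2 = rng.exponential(1.0, 150), rng.exponential(1.0 / 3.0, 150)
ones = np.ones(150, dtype=bool)   # no censoring in this sketch
z_alt = logrank_z(t1, ones, t2, ones)
```

Under the null of equal hazards, z is asymptotically standard normal.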

3.

This article presents methods for constructing confidence intervals for the median of a finite population under simple random sampling without replacement, stratified random sampling, and cluster sampling. The confidence intervals, as well as point estimates and test statistics, are derived from sign estimating functions which are based on the well-known sign test. Therefore, a unified approach for inference about the median of a finite population is given.
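Ignoring the finite-population corrections the article develops, the sign-test confidence interval for a median reduces to the classical order-statistic interval: under the null, the number of observations below the median is Binomial(n, 1/2). A sketch with hypothetical names, for the iid case only:

```python
import numpy as np
from scipy.stats import binom

def sign_ci_median(x, level=0.95):
    # Invert the sign test: if M is the median, the count of sample
    # points below M is Binomial(n, 1/2), so an approximately
    # level-confidence interval is a pair of symmetric order statistics.
    x = np.sort(np.asarray(x, float))
    n = len(x)
    alpha = 1.0 - level
    d = int(binom.ppf(alpha / 2.0, n, 0.5))  # lower-tail count
    return x[d], x[n - 1 - d]

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200)
lo, hi = sign_ci_median(x)
```

The coverage is only approximately the nominal level because the binomial is discrete; the article's estimating-function approach handles the sampling-design complications this sketch omits.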

4.
Abstract

This paper focuses on inference based on the confidence distributions of the nonparametric regression function and its derivatives, in which dependent inferences are combined by obtaining information about their dependency structure. We first give a motivating example from a production operation system to illustrate the practical relevance of the problems studied in this paper. A goodness-of-fit test for the polynomial regression model is proposed on the basis of combined confidence distribution inference, which reduces to Fisher's combination statistic in some cases. On the basis of the test results, a combined estimator for the pth-order derivative of the nonparametric regression function is provided, together with its large-sample properties. The performance of the proposed test and estimation method is illustrated by three specific examples. Finally, the motivating example is analyzed in detail. The simulated and real data examples illustrate the good performance and practicability of the proposed methods based on confidence distributions.

5.
In this article, we consider the problem of testing for variance breaks in time series in the presence of a changing trend. In performing the test, we employ the cumulative sum of squares (CUSSQ) test introduced by Inclán and Tiao (1994, J. Amer. Statist. Assoc., 89, 913–923). It is shown that the CUSSQ test is not robust in the case of a broken trend and that its asymptotic distribution does not converge to the supremum of a standard Brownian bridge. As a remedy, a bootstrap approximation method is designed to alleviate the size distortions of the test statistic while preserving its high power. Via a bootstrap functional central limit theorem, the consistency of these bootstrap procedures is established under general assumptions. Simulation results are provided for illustration, and an empirical application to high-frequency real data is given.
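The Inclán-Tiao statistic itself is a one-liner: center the cumulative sum of squares against a linear benchmark and take the scaled maximum deviation. A sketch on hypothetical data (the abstract's point is precisely that the sup-Brownian-bridge comparison below is invalid when a trend break is present):

```python
import numpy as np

def cussq_stat(e):
    # Inclán-Tiao centered cumulative sum of squares:
    # D_k = C_k / C_T - k / T, statistic = sqrt(T/2) * max_k |D_k|.
    e = np.asarray(e, float)
    T = len(e)
    C = np.cumsum(e ** 2)
    D = C / C[-1] - np.arange(1, T + 1) / T
    return np.sqrt(T / 2.0) * np.max(np.abs(D))

rng = np.random.default_rng(0)
stat_null = cussq_stat(rng.normal(0.0, 1.0, 1000))            # no break
stat_break = cussq_stat(np.concatenate([rng.normal(0.0, 1.0, 500),
                                        rng.normal(0.0, 3.0, 500)]))
```

For a trend-free iid series the statistic is compared with the supremum-of-Brownian-bridge 5% critical value, approximately 1.358.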

6.
Abstract

This article introduces a parametric robust way of comparing two population means and two population variances. With large samples the comparison of two means, under model misspecification, is less of a problem, since the validity of inference is protected by the central limit theorem. However, the assumption of normality is generally required for inference on the ratio of two variances to be carried out by the familiar F statistic. A parametric robust approach that is insensitive to the distributional assumption is proposed here. More specifically, it is demonstrated that the normal likelihood function can be adjusted to yield asymptotically valid inferences for all underlying distributions with finite fourth moments. The normal likelihood function, on the other hand, is itself robust for the comparison of two means, so no adjustment is needed.
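The article's adjusted-likelihood construction is not reproduced here, but the role of the fourth moment can be illustrated with the simplest version of the same idea: asymptotically, Var(log s²) is approximately (κ − 1)/n, where κ is the standardized fourth moment (3 under normality), so a kurtosis-adjusted z-test for the variance ratio stays valid beyond normality. A sketch with hypothetical names:

```python
import numpy as np
from scipy.stats import norm

def kurtosis_adjusted_var_test(x, y):
    # z-test for H0: var(x) = var(y); a pooled estimate of the
    # standardized fourth moment kappa replaces the normal-theory
    # value 3, so only finite fourth moments are required.
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    z_std = np.concatenate([(x - x.mean()) / np.sqrt(vx),
                            (y - y.mean()) / np.sqrt(vy)])
    kappa = np.mean(z_std ** 4)
    se = np.sqrt((kappa - 1.0) * (1.0 / nx + 1.0 / ny))
    z = np.log(vx / vy) / se
    return z, 2.0 * norm.sf(abs(z))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 400)
_, p_same = kurtosis_adjusted_var_test(x, rng.normal(0.0, 1.0, 400))
_, p_diff = kurtosis_adjusted_var_test(x, rng.normal(0.0, 2.0, 400))
```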

7.
ABSTRACT

We derive the influence function of the likelihood ratio test statistic for a multivariate normal sample. The derived influence function does not depend on the influence functions of the parameters under the null hypothesis, so the empirical influence function can be obtained directly using only the maximum likelihood estimators under the null hypothesis. Since the derived formula is general, it can be applied to influence analysis in many statistical testing problems.

8.
ABSTRACT

We consider semiparametric inference on the partially linear single-index model (PLSIM). The generalized likelihood ratio (GLR) test is proposed to examine whether or not a family of new semiparametric models adequately fits the given data in the PLSIM. A new GLR statistic is established to deal with testing of the index parameter α0 in the PLSIM. The newly proposed statistic is shown to asymptotically follow a χ2-distribution, with the scale constant and the degrees of freedom being independent of the nuisance parameters or function. Some finite-sample simulations and a real example are used to illustrate the proposed methodology.

9.
ABSTRACT

The analysis of clustered data in a longitudinal ophthalmology study is complicated by correlations between repeatedly measured visual outcomes of paired eyes in a participant and by missing observations due to loss to follow-up. In the present article we consider hypothesis testing problems in an ophthalmology study, where eligible eyes are randomized to two treatments (when both eyes of a participant are eligible, the paired eyes are assigned to different treatments), and vision function outcomes are repeatedly measured over time. A large-sample nonparametric test statistic and a nonparametric bootstrap analog are proposed for testing an interaction effect of two factors and for testing an effect of an eye-specific factor within a level of the other, person-specific factor on visual function outcomes. Both test statistics allow for missing observations, correlations between repeatedly measured outcomes on individual eyes, and correlations between repeatedly measured outcomes on both eyes of each participant. A simulation study shows that the proposed test statistics approximately maintain nominal significance levels, have power comparable to each other, and have higher power than the naive test statistic that ignores correlations between repeated bilateral measurements of both eyes in the same person. For illustration, we apply the proposed test statistics to changes in visual field defect score in the Advanced Glaucoma Intervention Study.
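The resampling principle behind such bootstrap tests can be sketched generically: resample participants, not eyes, so the within-person correlation between paired eyes is preserved in every bootstrap replicate. All names and data below are hypothetical, not the article's procedure:

```python
import numpy as np

def person_bootstrap(persons, stat, B=500, seed=0):
    # Cluster bootstrap: resample whole participants (each entry of
    # `persons` holds that participant's eye-level outcomes) with
    # replacement, recomputing `stat` on each resample.
    rng = np.random.default_rng(seed)
    n = len(persons)
    reps = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        reps[b] = stat([persons[i] for i in idx])
    return reps

# Hypothetical data: 40 participants with one or two eligible eyes each
rng = np.random.default_rng(1)
persons = [rng.normal(0.0, 1.0, size=rng.integers(1, 3)) for _ in range(40)]

def mean_outcome(ps):
    return np.concatenate(ps).mean()

reps = person_bootstrap(persons, mean_outcome)
```

The bootstrap distribution `reps` can then be used to calibrate a test statistic without modeling the eye-to-eye correlation explicitly.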

10.
Through random cut‐points theory, the author extends inference for ordered categorical data to the unspecified continuum underlying the ordered categories. He shows that a random cut‐point Mann‐Whitney test yields slightly smaller p‐values than the conventional test for most data. However, when at least P% of the data lie in one of the k categories (with P = 80 for k = 2, P = 67 for k = 3,…, P = 18 for k = 30), he also shows that the conventional test can yield much smaller p‐values, and hence misleadingly liberal inference for the underlying continuum. The author derives formulas for exact tests; for k = 2, the Mann‐Whitney test is but a binomial test.

11.
ABSTRACT

We propose two nonparametric portmanteau test statistics for serial dependence in high dimensions using the correlation integral. One test depends on a cutoff threshold value, while the other is free of this dependence. Although these tests may each be viewed as variants of the classical Brock, Dechert, and Scheinkman (BDS) test statistic, they avoid some of the major weaknesses of that test. We establish consistency and asymptotic normality of both portmanteau tests. Using Monte Carlo simulations, we investigate the small-sample properties of the tests for a variety of data-generating processes with normally and uniformly distributed innovations. We show that asymptotic theory provides accurate inference in finite samples and for relatively high dimensions. This is followed by a power comparison with the BDS test and with several rank-based extensions of the BDS test that have recently been proposed in the literature. Two real data examples are provided to illustrate the use of the test procedure.
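The correlation integral that both proposed statistics are built from is simple to compute: it is the fraction of pairs of m-histories of the series that lie within eps of each other in the sup norm. A brute-force sketch (hypothetical names; the article's portmanteau statistics combine such quantities across dimensions):

```python
import numpy as np

def correlation_integral(x, m, eps):
    # Fraction of pairs of m-histories of the series within eps of
    # each other in the sup norm -- the building block of BDS-type
    # serial-dependence statistics.
    x = np.asarray(x, float)
    n = len(x) - m + 1
    H = np.column_stack([x[i:i + n] for i in range(m)])   # m-histories
    dist = np.abs(H[:, None, :] - H[None, :, :]).max(axis=2)
    iu = np.triu_indices(n, k=1)                          # distinct pairs
    return np.mean(dist[iu] < eps)

x = np.random.default_rng(0).uniform(size=400)
c1 = correlation_integral(x, 1, 0.25)   # iid U(0,1): about 2*eps - eps**2
```

For an iid series the correlation integral at dimension m factorizes approximately into the m-th power of the dimension-one value; BDS-type tests measure departures from that factorization.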

12.
ABSTRACT

Correlated bilateral data arise from stratified studies involving paired body organs in a subject. When it is desirable to conduct inference on the scale of risk difference, one first needs to assess the assumption of homogeneity of risk differences across strata. For testing homogeneity of risk differences, we propose eight methods, derived respectively from weighted least squares (WLS), the Mantel-Haenszel (MH) estimator, the WLS method combined with the inverse hyperbolic tangent transformation, the test statistics based on their log-transformations, the modified score test statistic, and the likelihood ratio test statistic. Simulation results showed that four of the tests perform well in general, with the tests based on the WLS method and the inverse hyperbolic tangent transformation always performing satisfactorily, even in small-sample designs. The methods are illustrated with a dataset.

13.
There are two common methods for statistical inference on 2 × 2 contingency tables. One is the widely taught Pearson chi-square test, which uses the well-known χ2 statistic. The chi-square test is appropriate for large-sample inference, and in the 2 × 2 case it is equivalent to the Z-test based on the difference between the two sample proportions. The other method is Fisher's exact test, which evaluates the likelihood of each table with the same marginal totals. This article mathematically justifies that these two methods for determining extremeness do not completely agree with each other. Our analysis obtains one-sided and two-sided conditions under which a disagreement in determining extremeness between the two tests can occur. We also address the question of whether their discrepancy in determining extremeness can lead them to draw different conclusions when testing homogeneity or independence. Our examination of the two tests sheds light on which test should be trusted when the two draw different conclusions.
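The disagreement is easy to exhibit numerically. For the hypothetical table below (our example, not the article's), the uncorrected chi-square test rejects independence at the 5% level while Fisher's exact test does not:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# A 2 x 2 table on which the two tests disagree at the 5% level:
# expected counts are (3, 3, 4, 4), so the large-sample test is
# already on shaky ground here.
table = np.array([[5, 1],
                  [2, 6]])
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)
odds_ratio, p_fisher = fisher_exact(table, alternative="two-sided")
# p_chi2 is about 0.031; p_fisher is about 0.103
```

Here the chi-square statistic is 14/3 ≈ 4.67 (p ≈ 0.031), while the two-sided Fisher p-value, summing all tables with probability no larger than the observed one, is about 0.103.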

14.
15.
This article deals with testing inference in the class of beta regression models with varying dispersion. We focus on inference in small samples. We perform a numerical analysis in order to evaluate the sizes and powers of different tests. We consider the likelihood ratio test, two adjusted likelihood ratio tests proposed by Ferrari and Pinheiro [Improved likelihood inference in beta regression, J. Stat. Comput. Simul. 81 (2011), pp. 431–443], the score test, the Wald test and bootstrap versions of the likelihood ratio, score and Wald tests. We perform tests on the parameters that index the mean submodel and also on the parameters in the linear predictor of the precision submodel. Overall, the numerical evidence favours the bootstrap tests. It is also shown that the score test is considerably less size-distorted than the likelihood ratio and Wald tests. An application that uses real (not simulated) data is presented and discussed.

16.
Most of the higher-order asymptotic results in statistical inference available in the literature assume model correctness. The aim of this paper is to develop higher-order results under model misspecification. The density functions to O(n^{-3/2}) of the robust score test statistic and the robust Wald test statistic are derived under the null hypothesis, for the scalar as well as the multiparameter case. Alternative statistics which are robust to O(n^{-3/2}) are also proposed.

17.
Abstract

Both Poisson and negative binomial regression can provide quasi-likelihood estimates for coefficients in exponential-mean models that are consistent in the presence of distributional misspecification. It has generally been recommended, however, that inference be carried out using asymptotically robust estimators for the parameter covariance matrix. As with linear models, such robust inference tends to lead to over-rejection of null hypotheses in small samples. Alternative methods for estimating coefficient estimator variances are considered. No one approach seems to remove all test bias, but the results do suggest that the use of the jackknife with Poisson regression tends to be least biased for inference.

18.
This paper presents a consistent Generalized Method of Moments (GMM) residuals-based test of functional form for time series models. By relating two moments we deliver a vector moment condition in which at least one element must be nonzero if the model is misspecified. The test will never fail to detect misspecification of any form for large samples, and is asymptotically chi-squared under the null, allowing for fast and simple inference. A simulation study reveals that randomly selecting the nuisance parameter leads to more power than supremum tests, and can achieve empirical power nearly equivalent to that of the most powerful test, even for relatively small n.

19.
In this paper, the application of the intersection–union test method in fixed‐dose combination drug studies is discussed. An approximate sample size formula for the problem of testing the efficacy of a combination drug using intersection–union tests is proposed. The sample sizes obtained from the formula are found to be reasonably accurate in terms of attaining the target power 1 − β for a specified β. Copyright © 2003 John Wiley & Sons, Ltd.
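The intersection-union construction itself is simple: the null hypothesis is a union (the combination fails to beat at least one monotherapy), so it is rejected only if every component null is rejected, and the overall p-value is the maximum of the component p-values. A sketch with entirely hypothetical trial data:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical responses: the combination must beat BOTH monotherapies
rng = np.random.default_rng(0)
combo = rng.normal(1.0, 1.0, 60)
drug_a = rng.normal(0.2, 1.0, 60)
drug_b = rng.normal(0.3, 1.0, 60)

# One-sided component tests: combination superior to each monotherapy
p_a = ttest_ind(combo, drug_a, alternative="greater").pvalue
p_b = ttest_ind(combo, drug_b, alternative="greater").pvalue
p_iut = max(p_a, p_b)   # reject only if every component test rejects
```

Because the overall level is driven by the least favorable component, sample size formulas such as the one proposed in the article must power the weakest comparison.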

20.
Abstract

We propose a simple procedure based on an existing "debiased" l1-regularized method for inference on the average partial effects (APEs) in approximately sparse probit and fractional probit models with panel data, where the number of time periods is fixed and small relative to the number of cross-sectional observations. Our method is computationally simple and does not suffer from the incidental parameters problem that comes from attempting to estimate the unobserved heterogeneity for each cross-sectional unit as a parameter. Furthermore, it is robust to arbitrary serial dependence in the underlying idiosyncratic errors. Our theoretical results illustrate that inference concerning APEs is more challenging than inference about fixed and low-dimensional parameters, as the former requires deriving the asymptotic normality of sample averages of linear functions of a potentially large set of components of our estimator when a series approximation for the conditional mean of the unobserved heterogeneity is used. Insights on the applicability and implications of other existing Lasso-based inference procedures for our problem are provided. We apply the debiasing method to estimate the effects of spending on test pass rates. Our results show that spending has a positive and statistically significant average partial effect; moreover, the effect is comparable to that found using standard parametric methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号