Similar Literature
20 similar documents found
1.
We consider small sample equivalence tests for exponentiality. Statistical inference in this setting is particularly challenging, since equivalence testing procedures typically require much larger sample sizes than classical “difference tests” to perform well. We make use of Butler's marginal likelihood for the shape parameter of a gamma distribution in our development of small sample equivalence tests for exponentiality. We consider two procedures using the principle of confidence interval inclusion, four Bayesian methods, and the uniformly most powerful unbiased (UMPU) test, where a saddlepoint approximation to the intractable distribution of a canonical sufficient statistic is used. We perform small sample simulation studies to assess the bias of our various tests and show that all of the Bayes posteriors we consider are integrable. Our simulation studies show that the saddlepoint-approximated UMPU method performs remarkably well for small sample sizes and is the only method that consistently exhibits an empirical significance level close to the nominal 5% level.
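To make the confidence-interval-inclusion idea concrete, here is a minimal Python sketch. It replaces the paper's Butler marginal-likelihood machinery with a plain gamma MLE and a parametric-bootstrap percentile interval; the 90% level and the equivalence margin of 0.25 are arbitrary illustrative choices, not the authors' procedure.

```python
import numpy as np
from scipy import stats

def gamma_shape_ci(x, level=0.90, n_boot=2000, rng=None):
    """Parametric-bootstrap percentile CI for the gamma shape parameter."""
    rng = np.random.default_rng(rng)
    a_hat, _, scale_hat = stats.gamma.fit(x, floc=0)  # MLE with location fixed at 0
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.gamma(shape=a_hat, scale=scale_hat, size=len(x))
        boot[b], _, _ = stats.gamma.fit(xb, floc=0)
    return np.quantile(boot, [(1 - level) / 2, (1 + level) / 2])

def equivalence_test_exponential(x, margin=0.25, level=0.90):
    """Declare equivalence to exponentiality (gamma shape == 1) if the CI for
    the shape lies entirely inside [1 - margin, 1 + margin]."""
    lo, hi = gamma_shape_ci(x, level=level)
    return (1 - margin) < lo and hi < (1 + margin)

x = np.random.default_rng(1).exponential(scale=2.0, size=25)
print(equivalence_test_exponential(x))
```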

2.
We investigate the behavior of the well-known Hylleberg, Engle, Granger and Yoo (HEGY) regression-based seasonal unit root tests in cases where the driving shocks can display periodic nonstationary volatility and conditional heteroskedasticity. Our set up allows for periodic heteroskedasticity, nonstationary volatility and (seasonal) generalized autoregressive conditional heteroskedasticity as special cases. We show that the limiting null distributions of the HEGY tests depend, in general, on nuisance parameters which derive from the underlying volatility process. Monte Carlo simulations show that the standard HEGY tests can be substantially oversized in the presence of such effects. As a consequence, we propose wild bootstrap implementations of the HEGY tests. Two possible wild bootstrap resampling schemes are discussed, both of which are shown to deliver asymptotically pivotal inference under our general conditions on the shocks. Simulation evidence is presented which suggests that our proposed bootstrap tests perform well in practice, largely correcting the size problems seen with the standard HEGY tests even under extreme patterns of heteroskedasticity, while losing little finite-sample power relative to the standard HEGY tests.
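The core wild-bootstrap mechanics can be sketched generically, assuming the HEGY regression has already been fitted under the null. This is a skeleton, not either of the paper's specific resampling schemes; `stat_fn`, `y_null`, and `resid` are hypothetical placeholders for a user-supplied test statistic, null-restricted fitted values, and residuals.

```python
import numpy as np

def wild_bootstrap_pvalue(stat_fn, y_null, resid, n_boot=999, rng=None):
    """Generic wild-bootstrap p-value: rebuild the series under the null with
    sign-flipped residuals (Rademacher multipliers), which preserves the
    heteroskedasticity pattern, and recompute the test statistic."""
    rng = np.random.default_rng(rng)
    t_obs = stat_fn(y_null + resid)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        w = rng.choice([-1.0, 1.0], size=resid.shape)  # Rademacher weights
        t_boot[b] = stat_fn(y_null + w * resid)
    # left-tailed rejection region, as for unit-root t-type statistics
    return (1 + np.sum(t_boot <= t_obs)) / (1 + n_boot)
```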

3.
Since the publication of the seminal paper by Cox (1972), the proportional hazards model has become very popular in regression analysis for right-censored data. In observational studies, treatment assignment may depend on observed covariates. If these confounding variables are not accounted for properly, inference based on the Cox proportional hazards model may perform poorly. As shown in Rosenbaum and Rubin (1983), under the strongly ignorable treatment assignment assumption, conditioning on the propensity score yields valid causal effect estimates. We therefore incorporate the propensity score into the Cox model for causal inference with survival data. We derive the asymptotic properties of the maximum partial likelihood estimator when the model is correctly specified, and, using a robust variance estimator, when the model is incorrectly specified. Simulation results show that our method performs quite well for observational data. The approach is applied to a real dataset on the time to readmission of trauma patients.
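A minimal sketch of the adjustment on simulated data, assuming the `lifelines` package: the propensity score is estimated by logistic regression and then entered as a covariate in the Cox model. This illustrates only the simplest linear adjustment, not the paper's asymptotic derivations.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))                      # observed confounders
p = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
treat = rng.binomial(1, p)                       # treatment depends on covariates
time = rng.exponential(1 / np.exp(0.5 * treat + 0.7 * x[:, 0]))
event = (time < 3.0).astype(int)                 # administrative censoring at t = 3
time = np.minimum(time, 3.0)

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
df = pd.DataFrame({"time": time, "event": event, "treat": treat, "ps": ps})
CoxPHFitter().fit(df, duration_col="time", event_col="event").print_summary()
```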

4.
In this article, we consider inference about the correlation coefficients of several bivariate normal distributions. We first propose computational approach tests for testing the equality of the correlation coefficients. These approaches are, in fact, parametric bootstrap tests, and simulation studies show that they perform very satisfactorily: the actual sizes of these tests are closer to the nominal level than those of existing approaches. We also present a computational approach test and a parametric bootstrap confidence interval for inference about the common correlation coefficient. Finally, all the approaches are illustrated using two real examples.
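One natural parametric bootstrap for this hypothesis can be sketched as follows; the Fisher-z heterogeneity statistic used here is my illustrative choice and not necessarily the authors' exact statistic.

```python
import numpy as np

def fisher_z(r):
    return np.arctanh(r)

def corr_homogeneity_pboot(samples, n_boot=2000, rng=None):
    """Parametric-bootstrap test that several bivariate-normal samples share
    one correlation coefficient. `samples` is a list of (n_i, 2) arrays."""
    rng = np.random.default_rng(rng)

    def stat(samps):
        r = np.array([np.corrcoef(s.T)[0, 1] for s in samps])
        w = np.array([len(s) - 3 for s in samps])
        z = fisher_z(r)
        zbar = np.sum(w * z) / np.sum(w)
        return np.sum(w * (z - zbar) ** 2)

    t_obs = stat(samples)
    # common correlation under H0, pooled on the Fisher-z scale
    w = np.array([len(s) - 3 for s in samples])
    z = np.array([fisher_z(np.corrcoef(s.T)[0, 1]) for s in samples])
    r0 = np.tanh(np.sum(w * z) / np.sum(w))
    cov = np.array([[1.0, r0], [r0, 1.0]])
    t_boot = np.array([
        stat([rng.multivariate_normal([0, 0], cov, size=len(s)) for s in samples])
        for _ in range(n_boot)
    ])
    return np.mean(t_boot >= t_obs)

samples = [np.random.default_rng(i).multivariate_normal(
    [0, 0], [[1, .5], [.5, 1]], 40) for i in range(3)]
print(corr_homogeneity_pboot(samples, n_boot=500))
```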

5.
In reliability analysis, accelerated life-testing allows for gradual increment of stress levels on test units during an experiment. In a special class of accelerated life tests known as step-stress tests, the stress levels increase discretely at pre-fixed time points, and this allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Moreover, when a test unit fails, there is often more than one fatal cause of failure, such as mechanical or electrical. In this article, we consider the simple step-stress model under Type-II censoring when the lifetime distributions of the different risk factors are independently exponentially distributed. Under this setup, we derive the maximum likelihood estimators (MLEs) of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. The exact distributions of the MLEs of the parameters are then derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions and the parametric bootstrap method, we discuss the construction of confidence intervals for the parameters and assess their performance through Monte Carlo simulations. Finally, we illustrate the methods of inference discussed here with an example.
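For intuition, here is the standard cumulative-exposure MLE for a simple step-stress test with a single failure mode, as a sketch; the article's competing-risks version splits the same total time on test across cause-specific failure counts.

```python
import numpy as np

def step_stress_mle(times, n, r, tau):
    """MLEs of the two exponential means in a simple step-stress test with
    stress change at `tau` and Type-II censoring at the r-th failure.
    `times` holds the observed ordered failure times (at least r of them)."""
    t = np.sort(np.asarray(times))[:r]
    n1 = int(np.sum(t <= tau))                   # failures under stress level 1
    r_time = t[r - 1]
    # total time on test accumulated at each stress level (cumulative exposure)
    d1 = np.sum(t[:n1]) + (n - n1) * tau
    d2 = np.sum(t[n1:] - tau) + (n - r) * (r_time - tau)
    theta1 = d1 / n1 if n1 > 0 else np.inf       # MLE exists only if n1 >= 1
    theta2 = d2 / (r - n1) if r > n1 else np.inf
    return theta1, theta2
```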

6.
In seasonal influenza epidemics, pathogens such as respiratory syncytial virus (RSV) often co-circulate with influenza and cause influenza-like illness (ILI) in human hosts. However, it is often impractical to test for each potential pathogen or to collect specimens for each observed ILI episode, making inference about influenza transmission difficult. In the setting of infectious diseases, missing outcomes impose a particular challenge because of the dependence among individuals. We propose a Bayesian competing-risk model for multiple co-circulating pathogens for inference on transmissibility and intervention efficacies under the assumption that missingness in the biological confirmation of the pathogen is ignorable. Simulation studies indicate a reasonable performance of the proposed model even if the number of potential pathogens is misspecified. They also show that a moderate amount of missing laboratory test results has only a small impact on inference about key parameters in the setting of close contact groups. Using the proposed model, we found that a non-pharmaceutical intervention is marginally protective against transmission of influenza A in a study conducted in elementary schools.

7.
Fast and robust bootstrap
In this paper we review recent developments on a bootstrap method for robust estimators which is computationally faster and more resistant to outliers than the classical bootstrap. This fast and robust bootstrap method is, under reasonable regularity conditions, asymptotically consistent. We describe the method in general and then consider its application to perform inference based on robust estimators for the linear regression and multivariate location-scatter models. In particular, we study confidence and prediction intervals and tests of hypotheses for linear regression models, inference for location-scatter parameters and principal components, and classification error estimation for discriminant analysis.
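A heavily simplified illustration of the idea for a Huber M-estimator of location: solve the weighted-mean fixed point once on the full sample, then take a single cheap fixed-point step on each resample instead of fully re-iterating. Note the real fast and robust bootstrap also applies a linearization correction to make the recalculated values consistent; that step is omitted here.

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber psi(r)/r weights."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / a)

def frb_location(x, n_boot=2000, tol=1e-8, rng=None):
    """Simplified fast-and-robust bootstrap for a Huber location estimate
    with scale fixed at 1 (no correction matrix)."""
    rng = np.random.default_rng(rng)
    mu = np.median(x)
    while True:                                   # IRLS to the fixed point
        w = huber_weights(x - mu)
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    boot = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=len(x), replace=True)
        wb = huber_weights(xb - mu)               # weights at the full-sample fit
        boot[b] = np.sum(wb * xb) / np.sum(wb)    # one cheap fixed-point step
    return mu, boot
```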

8.
This article deals with testing inference in the class of beta regression models with varying dispersion. We focus on inference in small samples. We perform a numerical analysis in order to evaluate the sizes and powers of different tests. We consider the likelihood ratio test, two adjusted likelihood ratio tests proposed by Ferrari and Pinheiro [Improved likelihood inference in beta regression, J. Stat. Comput. Simul. 81 (2011), pp. 431–443], the score test, the Wald test and bootstrap versions of the likelihood ratio, score and Wald tests. We perform tests on the parameters that index the mean submodel and also on the parameters in the linear predictor of the precision submodel. Overall, the numerical evidence favours the bootstrap tests. It is also shown that the score test is considerably less size-distorted than the likelihood ratio and Wald tests. An application that uses real (not simulated) data is presented and discussed.
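The bootstrap likelihood-ratio test can be sketched in a model-agnostic way; `fit_null`, `fit_full`, and `simulate_null` are hypothetical user-supplied callables (e.g., wrapping beta-regression fits under the restricted and unrestricted models).

```python
import numpy as np

def bootstrap_lr_test(fit_null, fit_full, simulate_null, y, X, n_boot=499, rng=None):
    """Generic parametric-bootstrap LR test. `fit_*` return (max log-likelihood,
    parameter estimates); `simulate_null` draws responses from the fitted null."""
    rng = np.random.default_rng(rng)
    ll0, theta0 = fit_null(y, X)
    ll1, _ = fit_full(y, X)
    lr_obs = 2 * (ll1 - ll0)
    lr_boot = np.empty(n_boot)
    for b in range(n_boot):
        yb = simulate_null(theta0, X, rng)        # data generated under H0
        lr_boot[b] = 2 * (fit_full(yb, X)[0] - fit_null(yb, X)[0])
    return lr_obs, (1 + np.sum(lr_boot >= lr_obs)) / (1 + n_boot)
```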

9.
For a censored two-sample problem, Chen and Wang [Y.Q. Chen and M.-C. Wang, Analysis of accelerated hazards models, J. Am. Statist. Assoc. 95 (2000), pp. 608–618] introduced the accelerated hazards model. The scale-change parameter in this model characterizes the association of two groups. However, its estimator involves the unknown density in the asymptotic variance. Thus, to make an inference on the parameter, numerically intensive methods are needed. The goal of this article is to propose a simple estimation method in which estimators are asymptotically normal with a density-free asymptotic variance. Some lack-of-fit tests are also obtained from this. These tests are related to Gill–Schumacher type tests [R.D. Gill and M. Schumacher, A simple test of the proportional hazards assumption, Biometrika 74 (1987), pp. 289–300] in which the estimating functions are evaluated at two different weight functions yielding two estimators that are close to each other. Numerical studies show that for some weight functions, the estimators and tests perform well. The proposed procedures are illustrated in two applications.

10.
We develop and evaluate the validity and power of two specific tests for the transition probabilities in a Markov chain estimated from aggregate frequency data. The two null hypotheses considered are (1) constancy of the diagonal elements of the one-step transition probability matrix and (2) equality of an arbitrarily chosen transition probability to a specific value. The tests are constructed within a general framework for statistical inference on estimated Markov processes; we also indicate how this framework can be used to form tests for a variety of other hypotheses. The validity and power performance of the two tests are examined in factorially designed Monte Carlo experiments. The results indicate that the proposed tests lead to type I error probabilities close to the desired levels and to high power against even small deviations from the null hypotheses considered.
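For hypothesis (2), the simpler micro-data analogue (where individual transitions are observed as counts, rather than the aggregate frequencies the paper works with) is a one-line Wald test; a sketch:

```python
import numpy as np
from scipy import stats

def test_transition_prob(counts, i, j, p0):
    """Wald test of H0: p_ij = p0 for a Markov chain estimated from transition
    counts (counts[i, j] = number of observed i -> j moves)."""
    counts = np.asarray(counts, dtype=float)
    n_i = counts[i].sum()
    p_hat = counts[i, j] / n_i
    se = np.sqrt(p_hat * (1 - p_hat) / n_i)      # binomial standard error
    z = (p_hat - p0) / se
    return p_hat, 2 * stats.norm.sf(abs(z))      # two-sided p-value

counts = np.array([[60, 40], [25, 75]])
print(test_transition_prob(counts, i=0, j=0, p0=0.5))
```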

11.
Consistency of propensity score matching estimators hinges on the propensity score's ability to balance the distributions of covariates in the pools of treated and non-treated units. Conventional balance tests merely check for differences in covariates’ means, but cannot account for differences in higher moments. For this reason, this paper proposes balance tests which test for differences in the entire distributions of continuous covariates based on quantile regression (to derive Kolmogorov–Smirnov and Cramer–von-Mises–Smirnov-type test statistics) and resampling methods (for inference). Simulations suggest that these methods are very powerful and capture imbalances related to higher moments when conventional balance tests fail to do so.
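As a minimal sketch of the ingredients, here is an unconditional two-sample Kolmogorov–Smirnov-type balance test with permutation inference; the paper's statistics are instead derived via quantile regression so that covariates can be conditioned on.

```python
import numpy as np

def ks_balance_pvalue(x_treat, x_control, n_perm=999, rng=None):
    """Permutation p-value for a KS-type balance test of a continuous
    covariate across matched treated and control units."""
    rng = np.random.default_rng(rng)

    def ks_stat(a, b):
        grid = np.concatenate([a, b])
        fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
        fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
        return np.max(np.abs(fa - fb))            # sup-distance between ECDFs

    t_obs = ks_stat(x_treat, x_control)
    pooled = np.concatenate([x_treat, x_control])
    n1 = len(x_treat)
    t_perm = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(pooled)            # reshuffle group labels
        t_perm[b] = ks_stat(perm[:n1], perm[n1:])
    return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)
```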

12.
The term split-plot design refers to a common experimental setting in which a particular type of restricted randomization has occurred during a planned experiment. The aim of this article is to suggest a new method to perform inference on split-plot experiments via combination-based permutation tests. This novel nonparametric approach has been studied and validated using a Monte Carlo simulation study in which we compared it with the parametric and nonparametric procedures proposed in the literature. Results suggest that in experimental situations where normality is hard to justify, and especially when errors have heavy-tailed distributions, the proposed nonparametric procedure can be considered a valid solution.
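To show how a permutation test respects the restricted randomization, here is a sketch for the whole-plot stratum only: treatment labels are permuted at the whole-plot level, never within plots. This is one ingredient, not the authors' full combination-based procedure.

```python
import numpy as np

def whole_plot_permutation_test(y, whole_plot, trt, n_perm=999, rng=None):
    """Permutation test for a binary whole-plot factor in a split-plot
    experiment; `trt` is assumed constant within each whole plot."""
    rng = np.random.default_rng(rng)
    plots = np.unique(whole_plot)
    plot_trt = np.array([trt[whole_plot == g][0] for g in plots])
    plot_mean = np.array([y[whole_plot == g].mean() for g in plots])

    def stat(labels):
        return abs(plot_mean[labels == 1].mean() - plot_mean[labels == 0].mean())

    t_obs = stat(plot_trt)
    # permute labels across whole plots, preserving the design's restriction
    t_perm = np.array([stat(rng.permutation(plot_trt)) for _ in range(n_perm)])
    return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)
```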

13.
High-dimensional predictive models, those with more measurements than observations, require regularization to be well defined, perform well empirically, and possess theoretical guarantees. The amount of regularization, often determined by tuning parameters, is integral to achieving good performance. One can choose the tuning parameter in a variety of ways, such as through resampling methods or generalized information criteria. However, the theory supporting many regularized procedures relies on an estimate for the variance parameter, which is complicated in high dimensions. We develop a suite of information criteria for choosing the tuning parameter in lasso regression by leveraging the literature on high-dimensional variance estimation. We derive intuition showing that existing information-theoretic approaches work poorly in this setting. We compare our risk estimators to existing methods with an extensive simulation and derive some theoretical justification. We find that our new estimators perform well across a wide range of simulation conditions and evaluation criteria.
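The shape of such a procedure can be sketched over scikit-learn's lasso path; the crude residual-based variance plug-in below is an illustrative stand-in for the paper's high-dimensional variance estimators, and the AIC/BIC-style penalties are the textbook choices.

```python
import numpy as np
from sklearn.linear_model import lasso_path

def lasso_ic_select(X, y, criterion="bic"):
    """Select the lasso tuning parameter by an information criterion, plugging
    in a variance estimate from the least-penalized fit on the path."""
    n, p = X.shape
    alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
    # crude variance estimate: residuals of the smallest-lambda (last) fit
    resid = y - X @ coefs[:, -1]
    df_min = np.count_nonzero(coefs[:, -1])
    sigma2 = np.sum(resid ** 2) / max(n - df_min, 1)
    penalty = np.log(n) if criterion == "bic" else 2.0
    scores = []
    for k in range(coefs.shape[1]):
        rss = np.sum((y - X @ coefs[:, k]) ** 2)
        df = np.count_nonzero(coefs[:, k])        # lasso degrees of freedom
        scores.append(rss / sigma2 + penalty * df)
    return alphas[int(np.argmin(scores))]
```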

14.
In this paper, we consider the setting where the observed data are incomplete. For the general situation where the number of gaps, as well as the number of unobserved values in some gaps, goes to infinity, the asymptotic behavior of the maximum likelihood estimator is not clear. We derive and investigate the asymptotic properties of the maximum likelihood estimator under censorship, and derive a statistic for testing the null hypothesis that the proposed non-nested models are equally close to the true model against the alternative hypothesis that one model is closer, in a life-time setting. Furthermore, we derive a normalization of the difference of Akaike criteria for estimating the difference in expected Kullback–Leibler risk between the distributions in the two models.
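The complete-data prototype of such a normalized likelihood-difference statistic is the Vuong (1989) test, sketched below; the paper's contribution is extending this kind of comparison to censored, gap-ridden data, which the sketch does not attempt.

```python
import numpy as np
from scipy import stats

def vuong_statistic(loglik1, loglik2):
    """Normalized mean difference of pointwise log-likelihoods of two
    non-nested models; under H0 (equal Kullback-Leibler closeness to the
    truth) the statistic is asymptotically standard normal."""
    d = np.asarray(loglik1) - np.asarray(loglik2)
    n = len(d)
    z = np.sqrt(n) * d.mean() / d.std(ddof=1)
    return z, 2 * stats.norm.sf(abs(z))
```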

15.
The assumption of serial independence of disturbances is the starting point of most of the work done on analyzing market disequilibrium models. We derive tests for serial dependence given normality and homoscedasticity using the Lagrange multiplier (LM) test principle. Although the likelihood function under serial dependence is very complicated and involves multiple integrals of dimensions equal to the sample size, the test statistic we obtain through the LM principle is very simple. We apply the test to the housing-start data of Fair and Jaffee (1972) and study its finite sample properties through simulation. The test seems to perform quite well in finite samples in terms of size and power. We present an analysis of disequilibrium models that assumes that the disturbances are logistic rather than normal. The relative performances of these distributions are investigated by simulation.

16.
Correlated bilateral data arise from stratified studies involving paired body organs in a subject. When it is desirable to conduct inference on the scale of risk difference, one first needs to assess the assumption of homogeneity of the risk differences across strata. For testing this homogeneity, we herein propose eight methods, derived respectively from weighted least squares (WLS), the Mantel–Haenszel (MH) estimator, the WLS method combined with an inverse hyperbolic tangent transformation, the test statistics based on their log-transformations, a modified score test statistic, and a likelihood ratio test statistic. Simulation results showed that four of the tests perform well in general, with the tests based on the WLS method and the inverse hyperbolic tangent transformation performing satisfactorily even under small sample size designs. The methods are illustrated with a dataset.
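The plain inverse-variance WLS version of such a homogeneity test is short enough to sketch; note this independent-samples form ignores the within-subject correlation of paired organs that the article explicitly accounts for.

```python
import numpy as np
from scipy import stats

def wls_homogeneity_test(x1, n1, x2, n2):
    """WLS (inverse-variance) chi-square test that the risk difference is
    constant across strata; xk/nk are events/trials per stratum and group."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2                                   # stratum risk differences
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    w = 1.0 / var
    d_bar = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_bar) ** 2)              # ~ chi-square, df = K - 1
    return q, stats.chi2.sf(q, df=len(d) - 1)

x1 = np.array([15, 12, 20]); n1 = np.array([60, 55, 80])
x2 = np.array([8, 10, 11]);  n2 = np.array([58, 52, 77])
print(wls_homogeneity_test(x1, n1, x2, n2))
```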

17.
The computational demand required to perform inference using Markov chain Monte Carlo methods often obstructs a Bayesian analysis. This may be a result of large datasets, complex dependence structures, or expensive computer models. In these instances, the posterior distribution is replaced by a computationally tractable approximation, and inference is based on this working model. However, the error that is introduced by this practice is not well studied. In this paper, we propose a methodology that allows one to examine the impact on statistical inference by quantifying the discrepancy between the intractable and working posterior distributions. This work provides a structure to analyse model approximations with regard to the reliability of inference and computational efficiency. We illustrate our approach through a spatial analysis of yearly total precipitation anomalies where covariance tapering approximations are used to alleviate the computational demand associated with inverting a large, dense covariance matrix.
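A minimal sketch of the covariance-tapering approximation itself, assuming an exponential covariance and a Wendland-1 taper: multiplying the covariance elementwise by a compactly supported correlation zeroes out long-range entries. The dense solve below is only for illustration; in practice the whole point is that the tapered matrix is sparse.

```python
import numpy as np

def wendland_taper(d, range_):
    """Wendland-1 taper: compactly supported correlation, zero beyond range_."""
    r = np.minimum(d / range_, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

def tapered_loglik(y, coords, sigma2, phi, taper_range):
    """Gaussian log-likelihood with an exponential covariance multiplied
    elementwise by the taper (the 'working' model of the abstract)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = sigma2 * np.exp(-d / phi) * wendland_taper(d, taper_range)
    sign, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, y)
    return -0.5 * (logdet + y @ alpha + len(y) * np.log(2 * np.pi))
```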

18.
Simultaneous inference allows for the exploration of data while deciding on criteria for proclaiming discoveries. It was recently proved that all admissible post hoc inference methods for the true discoveries must employ closed testing. In this paper, we investigate efficient closed testing with local tests of a special form: thresholding a function of sums of test scores for the individual hypotheses. Under this special design, we propose a new statistic that quantifies the cost of multiplicity adjustments, and we develop fast (mostly linear-time) algorithms for post hoc inference. Paired with recent advances in global null tests based on generalized means, our work instantiates a series of simultaneous inference methods that can handle many dependence structures and signal compositions. We provide guidance on the method choices via theoretical investigation of the conservativeness and sensitivity for different local tests, as well as simulations that find analogous behavior for local tests and full closed testing.
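To see what closed testing with a sum-based local test does, here is a brute-force sketch for independent z-scores: a hypothesis is rejected only if every subset containing it is locally rejected. This enumerates all 2^m subsets and is feasible only for tiny m; the paper's algorithms exploit the sum structure precisely to avoid this.

```python
import itertools
import numpy as np
from scipy import stats

def closed_testing_rejections(z, alpha=0.05):
    """Brute-force closed testing with a sum-of-z local test: subset S is
    locally rejected if sum(z_S) > z_{1-alpha} * sqrt(|S|)."""
    m = len(z)
    crit = stats.norm.ppf(1 - alpha)
    rejected = set()
    for i in range(m):
        # H_i is rejected iff every subset containing i is locally rejected
        ok = all(
            sum(z[j] for j in s) > crit * np.sqrt(len(s))
            for k in range(1, m + 1)
            for s in itertools.combinations(range(m), k)
            if i in s
        )
        if ok:
            rejected.add(i)
    return rejected

print(closed_testing_rejections(np.array([3.2, 0.5, 2.8])))  # -> {0, 2}
```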

19.
In the present article, we develop some asymptotically powerful partially sequential nonparametric tests for monitoring structural changes. Our test procedures are based on the Wilcoxon score, and we use the idea of curved stopping boundaries. We derive some exact results and perform simulation studies to establish various properties of the tests. One of the proposed procedures controls the Type I error rate well and may be very effective for fluctuation monitoring. We illustrate the procedures using real-life data from the stock market.
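A stylized sketch of the monitoring loop, assuming a fixed baseline sample and a square-root ("curved") boundary: the boundary constant and the centered placement score below are illustrative stand-ins, not the authors' exact construction.

```python
import numpy as np

def sequential_wilcoxon_monitor(baseline, stream, boundary_const=2.0):
    """Partially sequential monitoring: placements of incoming observations in
    a fixed baseline sample are accumulated, and monitoring stops when the
    cumulative Wilcoxon-type score crosses a square-root boundary."""
    m = len(baseline)
    s = 0.0
    for n, x in enumerate(stream, start=1):
        u = np.sum(baseline < x) / m - 0.5        # centered placement of x
        s += u
        if abs(s) > boundary_const * np.sqrt(n):  # curved stopping boundary
            return n                              # structural change signalled
    return None                                   # no signal within the stream
```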

20.
In this paper, we seek to establish asymptotic results for selective inference procedures, removing the assumption of Gaussianity. The class of selection procedures we consider are determined by affine inequalities, which we refer to as affine selection procedures. Examples of affine selection procedures include selective inference along the solution path of the least absolute shrinkage and selection operator (LASSO), as well as selective inference after fitting the LASSO at a fixed value of the regularization parameter. We also consider some tests in penalized generalized linear models. Our result proves asymptotic convergence in the high-dimensional setting where n < p, and for some procedures n may be as small as a logarithmic factor of the dimension p.
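For context, the Gaussian building block that such affine procedures rest on is the polyhedral lemma: conditional on the selection event {Ay ≤ b}, a linear functional of y is truncated normal. A sketch with known σ is below; the paper's point is that inference of this type remains asymptotically valid without the Gaussian assumption.

```python
import numpy as np
from scipy import stats

def polyhedral_pvalue(y, A, b, eta, sigma):
    """One-sided selective p-value for H0: eta' mu = 0 given {A y <= b},
    via the polyhedral lemma: eta' y is truncated normal on [vlo, vup]."""
    eta = np.asarray(eta, dtype=float)
    c = (A @ eta) / (eta @ eta)
    t = eta @ y
    resid = A @ y - c * t                 # component independent of eta' y
    with np.errstate(divide="ignore", invalid="ignore"):
        bounds = (b - resid) / c
    vlo = np.max(bounds[c < 0], initial=-np.inf)
    vup = np.min(bounds[c > 0], initial=np.inf)
    sd = sigma * np.sqrt(eta @ eta)
    num = stats.norm.cdf(vup / sd) - stats.norm.cdf(t / sd)
    den = stats.norm.cdf(vup / sd) - stats.norm.cdf(vlo / sd)
    return num / den                      # P(T >= t | selection) under H0
```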
