Similar Documents
20 similar documents found (search time: 343 ms)
1.
We consider testing the significance of the coefficients in the linear model. Unlike in the classical approach, there is no alternative hypothesis to accept when the null hypothesis is rejected. When there is a substantial deviation from the null hypothesis, we reject the null hypothesis and, based on the data, identify alternative hypotheses associated with the independent variables or the levels that contributed most to the deviation from the null hypothesis.
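The two-stage idea of rejecting an overall test and then asking which coefficients drove the rejection can be sketched as follows. This is an illustrative stand-in, not the authors' procedure: the function name and the use of per-coefficient t statistics to rank contributions are my own choices.

```python
import numpy as np
from scipy import stats

def overall_f_test(X, y):
    """Overall F-test of H0: all slope coefficients are zero, plus
    per-coefficient t statistics that can be inspected after a rejection
    to see which predictors contributed most to the deviation from H0.

    X: (n, p) predictor matrix (no intercept column); y: (n,) response.
    """
    n, p = X.shape
    Xd = np.column_stack([np.ones(n), X])            # add intercept
    beta, _, _, _ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    rss = resid @ resid                              # residual sum of squares
    tss = np.sum((y - y.mean()) ** 2)                # total sum of squares
    f_stat = ((tss - rss) / p) / (rss / (n - p - 1))
    p_value = stats.f.sf(f_stat, p, n - p - 1)
    sigma2 = rss / (n - p - 1)
    cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
    t_stats = beta[1:] / np.sqrt(np.diag(cov)[1:])   # slope t statistics
    return f_stat, p_value, t_stats
```

After a small overall p-value, the slope with the largest absolute t statistic points to the predictor most responsible for the rejection.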

2.
Current methods for testing the equality of conditional correlations of bivariate data on a third variable of interest (a covariate) are limited because they discretize the covariate when it is continuous. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, in which the correlation itself can be modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows for flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across a number of covariates in national survey data.

3.
Recommended methods for analyzing unbalanced two-way data may be classified into two major categories: the parametric interpretation approach and the model comparison approach. Each approach has its advantages and drawbacks. The main drawback of the parametric interpretation approach is non-orthogonality. For the model comparison approach, the main drawback is the dependence of the hypothesis tested on the cell sizes. In this paper we provide examples to illustrate these drawbacks.

4.
In this paper we introduce a parametric model for handling lifetime data in which an early lifetime can be related either to infant-mortality failure or to wear processes, but we do not know which risk is responsible for the failure. The maximum likelihood approach and the sampling-based approach are used to obtain the inferences of interest. Some special cases of the proposed model are studied via Monte Carlo methods for the size and power of hypothesis tests. To illustrate the proposed methodology, we present an example based on a real data set.

5.
In this paper we obtain several influence measures for the multivariate general linear model through the approach proposed by Muñoz-Pichardo et al. (1995), which is based on the concept of conditional bias. An interesting characteristic of this approach is that it does not require any distributional hypothesis. Applying the obtained results to the multivariate regression model, we recover some measures proposed by other authors. Regarding the results obtained in this paper, we emphasize two aspects. First, they provide a theoretical foundation for measures proposed by other authors for the multivariate regression model. Second, they can be applied to any linear model that can be formulated as a particular case of the multivariate general linear model. In particular, we present an application to the multivariate analysis of covariance.

6.
In this paper, we suggest a Bayesian panel (longitudinal) data approach to test for the economic growth convergence hypothesis. This approach can control for possible effects of initial income conditions, observed covariates and cross-sectional correlation of unobserved common error terms on inference procedures about the unit root hypothesis based on panel data dynamic models. Ignoring these effects can lead to spurious evidence supporting economic growth divergence. The application of our suggested approach to real gross domestic product panel data of the G7 countries indicates that the economic growth convergence hypothesis is supported by the data. Our empirical analysis shows that evidence of economic growth divergence for the G7 countries can be attributed to not accounting for the presence of exogenous covariates in the model.

7.
We propose a test for state dependence in binary panel data with individual covariates. For this aim, we rely on a quadratic exponential model in which the association between the response variables is accounted for differently than in more standard formulations. The level of association is measured by a single parameter that may be estimated by a Conditional Maximum Likelihood (CML) approach. Under the dynamic logit model, the conditional estimator of this parameter converges to zero when the hypothesis of absence of state dependence is true. Therefore, it is possible to implement a t-test for this hypothesis which can be performed very simply and attains the nominal significance level under several structures of the individual covariates. Through an extensive simulation study, we find that our test has good finite-sample properties and is more robust to the presence of (autocorrelated) covariates in the model specification than other existing testing procedures for state dependence. The proposed approach is illustrated by two empirical applications: the first is based on data from the Panel Study of Income Dynamics and concerns employment and fertility; the second is based on the Health and Retirement Study and concerns self-reported health status.

8.
Recently, the field of multiple hypothesis testing has experienced a great expansion, largely because of new methods developed in the field of genomics. These new methods allow scientists to process thousands of hypothesis tests simultaneously. The frequentist approach to this problem uses testing error measures that make it possible to control the Type I error rate at a desired level. Alternatively, in this article, a Bayesian hierarchical model based on mixture distributions and an empirical Bayes approach are proposed in order to produce a list of rejected hypotheses that are declared significant and interesting for a more detailed subsequent analysis. In particular, we develop a straightforward implementation of a Gibbs sampling scheme in which all the conditional posterior distributions are explicit. The results are compared with the frequentist False Discovery Rate (FDR) methodology. Simulation examples show that our model improves on the FDR procedure in the sense that it reduces the percentage of false negatives while keeping an acceptable percentage of false positives.
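A minimal sketch of a Gibbs sampler with explicit conditionals for mixture-based multiple testing, in the spirit of the abstract. The specific model below (test statistics z_i drawn from a two-component normal mixture, a uniform prior on the non-null proportion, and a N(0, 100) prior on the non-null mean) is my own assumption, not the paper's model.

```python
import numpy as np

def gibbs_two_groups(z, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for z_i ~ (1 - pi) N(0, 1) + pi N(mu, 1).

    All full conditionals are explicit: the non-null indicators are
    Bernoulli, pi | indicators is Beta, and mu | indicators, z is normal.
    Returns the posterior probability that each hypothesis is non-null,
    which can be thresholded to build a rejection list.
    """
    rng = np.random.default_rng(seed)
    n = len(z)
    pi, mu = 0.5, 1.0
    prob_sum = np.zeros(n)
    kept = 0
    for it in range(n_iter):
        # P(indicator_i = 1 | pi, mu, z_i)
        w1 = pi * np.exp(-0.5 * (z - mu) ** 2)
        w0 = (1 - pi) * np.exp(-0.5 * z ** 2)
        p1 = w1 / (w0 + w1)
        gamma = rng.random(n) < p1
        k = gamma.sum()
        # pi | gamma ~ Beta(1 + k, 1 + n - k)  (uniform prior on pi)
        pi = rng.beta(1 + k, 1 + n - k)
        # mu | gamma, z is normal under a N(0, 100) prior on mu
        prec = k + 1.0 / 100.0
        mu = rng.normal(z[gamma].sum() / prec, 1.0 / np.sqrt(prec))
        if it >= burn:
            prob_sum += p1          # Rao-Blackwellized running average
            kept += 1
    return prob_sum / kept
```

Hypotheses whose posterior non-null probability exceeds a chosen cutoff form the rejection list that the abstract compares against the FDR procedure.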

9.
This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparisons detecting differences in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and for nonlinear population pharmacokinetic models involving only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model, and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. It can also handle design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one-compartment intravenous bolus dose model involving three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and the nominal power. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献
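The core Wald-test sample-size calculation behind this kind of approach can be sketched in a few lines. In the paper, the per-subject variance of the parameter estimate comes from linearizing the mixed-effects model; here it is simply assumed known, so this is a generic two-group sketch rather than the authors' method.

```python
import math
from scipy.stats import norm

def wald_sample_size(delta, var_per_subject, alpha=0.05, power=0.9):
    """Per-group sample size for a two-group Wald test of a parameter
    difference `delta`, assuming each group's estimate has variance
    var_per_subject / n. Standard normal-approximation formula:
        n = (z_{1-alpha/2} + z_{power})^2 * 2 * var / delta^2
    """
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    n = (z_a + z_b) ** 2 * 2 * var_per_subject / delta ** 2
    return math.ceil(n)
```

For example, detecting a difference of 0.5 with per-subject variance 1 at 90% power needs 85 subjects per group; larger detectable differences need fewer subjects.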

10.
In this paper we evaluate the performance of three methods for testing the existence of a unit root in a time series when the models under the null hypothesis do not display autocorrelation in the error term. In such cases, simple versions of the Dickey-Fuller test are more appropriate than the well-known augmented Dickey-Fuller or Phillips-Perron tests. Through Monte Carlo simulations we show that, apart from a few cases, the unit root tests attain actual Type I error and power very close to their nominal levels. Additionally, when the random walk null hypothesis is true, as the sample size gradually increases, the p-values for the drift in the unrestricted model fluctuate at low levels with small variance, and the Durbin-Watson (DW) statistic approaches 2 in both the unrestricted and restricted models. If, however, the null hypothesis of a random walk is false, then with a larger sample the DW statistic in the restricted model starts to deviate from 2, while in the unrestricted model it continues to approach 2. It is also shown that the probability of not rejecting that the errors are uncorrelated, when they are indeed uncorrelated, is higher when the DW test is applied at the 1% nominal significance level.
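The simple (non-augmented) Dickey-Fuller regression with drift and the DW diagnostic discussed above can be sketched as follows; this is an illustrative implementation, not the paper's simulation code.

```python
import numpy as np

def df_regression(y):
    """Simple Dickey-Fuller regression with drift:
        dy_t = a + rho * y_{t-1} + e_t.
    Returns the t statistic on rho (compare with DF critical values,
    e.g. about -2.86 at the 5% level for this model) and the
    Durbin-Watson statistic of the residuals.
    """
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, _, _, _ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    t_rho = beta[1] / se
    dw = np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)
    return t_rho, dw
```

On a true random walk the residuals are close to the iid innovations, so DW sits near 2 and the t statistic is typically too small in magnitude to reject; on a stationary AR(1) series the t statistic is strongly negative.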

11.
We propose a penalized empirical likelihood method via a bridge estimator in Cox's proportional hazards model for parameter estimation and variable selection. Under reasonable conditions, we show that penalized empirical likelihood in Cox's proportional hazards model has the oracle property. A penalized empirical likelihood ratio for the vector of regression coefficients is defined, and its limiting distribution is a chi-square distribution. The advantage of penalized empirical likelihood as a nonparametric likelihood approach is illustrated in hypothesis testing and in constructing confidence sets. The method is illustrated by extensive simulation studies and a real example.

12.
The comparison among m proportions can be viewed as the clustering of the means of Bernoulli trials. By introducing a distribution which is supported on the means of Bernoulli trials, we suggest a moment-method approach to determine the centers of the clusters. We also suggest using model selection criteria rather than the usual hypothesis testing approach to determine the grouping of the means. The discrepancy functions for all possible models are compared based on bootstrap results.
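The idea of scoring each candidate grouping of the proportions with a model-selection criterion instead of pairwise tests can be sketched as below. BIC is used here as a stand-in criterion; the paper's discrepancy function and bootstrap comparison differ.

```python
import math

def bic_of_grouping(successes, trials, groups):
    """BIC of a grouping of Bernoulli means: proportions in the same
    group share one success probability (lower BIC is better).

    successes, trials: per-sample counts; groups: list of index lists
    partitioning range(len(successes)).
    """
    ll = 0.0
    for g in groups:
        k = sum(successes[i] for i in g)
        n = sum(trials[i] for i in g)
        p = k / n
        if 0 < p < 1:  # at p in {0, 1} the binomial log-likelihood is 0
            ll += k * math.log(p) + (n - k) * math.log(1 - p)
    total_n = sum(trials)
    return -2 * ll + len(groups) * math.log(total_n)
```

With observed proportions 0.40, 0.42, and 0.80, the grouping that merges the first two means scores better than either pooling all three or keeping all three separate.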

13.
This paper presents a Bayesian-hypothesis-testing-based methodology for model validation and confidence extrapolation under uncertainty, using limited test data. An explicit expression of the Bayes factor is derived for the interval hypothesis testing. The interval method is compared with the Bayesian point null hypothesis testing approach. The Bayesian network with Markov Chain Monte Carlo simulation and Gibbs sampling is explored for extrapolating the inference from the validated domain at the component level to the untested domain at the system level. The effect of the number of experiments on the confidence in the model validation decision is investigated. The probabilities of Type I and Type II errors in decision-making during the model validation and confidence extrapolation are quantified. The proposed methodologies are applied to a structural mechanics problem. Numerical results demonstrate that the Bayesian methodology provides a quantitative approach to facilitate rational decisions in model validation and confidence extrapolation under uncertainty.

14.
A general approach to the analysis of fixed effects models through a reparameterization of the cell means model is outlined. After applying a Q-operator, distribution theory for the model is developed. Methods of estimation and hypothesis testing are given, and the exact form of each hypothesis is shown. The implementation of these results in a currently available computer package is discussed.

15.
Two types of state-switching models for U.S. real output have been proposed: models that switch randomly between states and models that switch states deterministically, as in the threshold autoregressive model of Potter. These models have been justified primarily on how well they fit the sample data, yielding statistically significant estimates of the model coefficients. Here we propose a new approach to the evaluation of an estimated nonlinear time series model that provides a complement to existing methods based on in-sample fit or on out-of-sample forecasting. In this new approach, a battery of distinct nonlinearity tests is applied to the sample data, resulting in a set of p-values for rejecting the null hypothesis of a linear generating mechanism. This set of p-values is taken to be a “stylized fact” characterizing the nonlinear serial dependence in the generating mechanism of the time series. The effectiveness of an estimated nonlinear model for this time series is then evaluated in terms of the congruence between this stylized fact and a set of nonlinearity test results obtained from data simulated using the estimated model. In particular, we derive a portmanteau statistic based on this set of nonlinearity test p-values that allows us to test the proposition that a given model adequately captures the nonlinear serial dependence in the sample data. We apply the method to several estimated state-switching models of U.S. real output.

16.
Time series within fields such as finance and economics are often modelled using long memory processes. Alternative studies on the same data can suggest that series may actually contain a ‘changepoint’ (a point within the time series where the data generating process has changed). These models have been shown to have elements of similarity, such as within their spectrum. Without prior knowledge this leads to an ambiguity between these two models, meaning it is difficult to assess which model is most appropriate. We demonstrate that considering this problem in a time-varying environment using the time-varying spectrum removes this ambiguity. Using the wavelet spectrum, we then use a classification approach to determine the most appropriate model (long memory or changepoint). Simulation results are presented across a number of models followed by an application to stock cross-correlations and US inflation. The results indicate that the proposed classification outperforms an existing hypothesis testing approach on a number of models and performs comparatively across others.

17.
Covariate adjustment for the estimation of treatment effect in randomized controlled trials (RCTs) is a simple approach with a long history, so its pros and cons have been well investigated and published in the literature. It is worthwhile to revisit this topic since there has recently been significant investigation and development on model assumptions and robustness to model mis-specification, in particular regarding the Neyman-Rubin model and the average treatment effect estimand. This paper discusses key results of this investigation and development and their practical implications for pharmaceutical statistics. Accordingly, we recommend that appropriate covariate adjustment should be more widely used in RCTs for both hypothesis testing and estimation.
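A minimal ANCOVA-style sketch of what covariate adjustment buys in an RCT: regress the outcome on an intercept, the treatment indicator, and a baseline covariate, and read off the treatment coefficient. This is only the simplest linear-adjustment estimator; the robustness results the abstract refers to concern a broader class of estimators.

```python
import numpy as np

def adjusted_treatment_effect(treat, x, y):
    """Covariate-adjusted treatment effect estimate in a randomized
    trial via OLS of y on [1, treat, x]. Returns the treatment
    coefficient and its (model-based) standard error.
    """
    X = np.column_stack([np.ones(len(y)), treat, x])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], se
```

When the covariate is prognostic, adjusting for it removes its contribution from the residual variance, so the adjusted standard error is smaller than the unadjusted one and the test for a treatment effect gains power.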

18.
In this article we deal with simultaneous two-sided tolerance intervals for a univariate linear regression model with independent normally distributed errors. We present a method for determining the intervals derived by the general confidence-set approach (GCSA), i.e. the intervals are constructed based on a specified confidence set for unknown parameters of the model. The confidence set used in the new method is formed based on a suggested hypothesis test about all parameters of the model. The simultaneous two-sided tolerance intervals determined by the presented method are found to be efficient and fast to compute based on a preliminary numerical comparison of all the existing methods based on GCSA.

19.
This paper introduces a novel sequential approach for online surveillance of the equal predictive ability (EPA) hypothesis presumed to hold for many competing forecasting models. A nonparametric control chart is suggested for providing a decision at every new time point as to whether the EPA hypothesis remains valid. The detection ability of our procedure is evaluated in a Monte Carlo simulation study for various types of deviations from the EPA hypothesis. Our approach enables the quick detection of various shift types, is parsimonious, and robust to misspecifications. Based on these results, we formulate practical recommendations for procedure design.

20.
Multiple Hypotheses Testing with Weights
In this paper we offer a multiplicity of approaches and procedures for multiple testing problems with weights. Some rationales for incorporating weights in multiple hypotheses testing are discussed. Various Type I error rates and different possible formulations are considered, for both the intersection hypothesis testing and the multiple hypotheses testing problems. An optimal per-family weighted error-rate controlling procedure a la Spjotvoll (1972) is obtained. This model serves as a vehicle for demonstrating the different implications of the approaches to weighting. Alternative approaches to that of Holm (1979) for family-wise error-rate control with weights are discussed, one involving an alternative procedure for family-wise error-rate control, and the other involving the control of a weighted family-wise error-rate. Extensions and modifications of the procedures based on Simes (1986) are given. These include a test of the overall intersection hypothesis with general weights, and weighted sequentially rejective procedures for testing the individual hypotheses. The false discovery rate controlling approach and procedure of Benjamini & Hochberg (1995) are extended to allow for different weights.
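The weighted extension of the Benjamini-Hochberg step-up procedure can be sketched as follows: each p-value is divided by its (normalized) weight before the usual step-up rule is applied. This is one standard form of weighted FDR control; the paper discusses several variants.

```python
import numpy as np

def weighted_bh(pvals, weights, q=0.05):
    """Weighted Benjamini-Hochberg procedure: apply the BH step-up rule
    to the weighted p-values p_i / w_i, with weights normalized to
    average 1 (equal weights recover the ordinary BH procedure).
    Returns a boolean array of rejected hypotheses.
    """
    pvals = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()               # weights average to 1
    pw = pvals / w
    m = len(pw)
    order = np.argsort(pw)
    thresh = q * np.arange(1, m + 1) / m   # BH step-up thresholds
    below = pw[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject
```

Up-weighting a hypothesis with prior support lowers its weighted p-value, so it can be rejected at a raw p-value that the unweighted procedure would not reject.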


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)