Similar Literature
20 similar articles found (search took 31 ms)
1.
Current methods for testing the equality of conditional correlations of bivariate data given a third variable of interest (a covariate) are limited because they require discretizing the covariate when it is continuous. In this study, we propose a linear model approach for estimation and hypothesis testing of the Pearson correlation coefficient, in which the correlation itself is modeled as a function of continuous covariates. The restricted maximum likelihood method is applied for parameter estimation, and the corrected likelihood ratio test is performed for hypothesis testing. This approach allows flexible and robust inference and prediction of the conditional correlations based on the linear model. Simulation studies show that the proposed method is statistically more powerful and more flexible in accommodating complex covariate patterns than existing methods. In addition, we illustrate the approach by analyzing the correlation between the physical component summary and the mental component summary of the MOS SF-36 form across several covariates in national survey data.
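A rough sketch of the idea (not the authors' REML-based implementation): model the correlation of a standardized bivariate normal pair as rho(x) = tanh(b0 + b1*x) and estimate the coefficients by maximum likelihood. All data and parameter values below are hypothetical.

```python
# Sketch: correlation modeled as a smooth function of a continuous covariate.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)                 # continuous covariate
rho = np.tanh(-0.5 + 2.0 * x)                # true correlation varies with x
y1 = rng.standard_normal(n)
y2 = rho * y1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def negloglik(beta):
    r = np.tanh(beta[0] + beta[1] * x)       # tanh link keeps |rho| < 1
    q = (y1**2 - 2 * r * y1 * y2 + y2**2) / (1 - r**2)
    return np.sum(0.5 * np.log(1 - r**2) + 0.5 * q)

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
print("estimated (b0, b1):", fit.x)
# H0: b1 = 0 (constant correlation) can then be assessed by a likelihood
# ratio test against a fit with b1 fixed at zero.
```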

2.
In this article, a Bayesian approach is proposed for the estimation of log odds ratios and intraclass correlations over a two-way contingency table that includes intraclass-correlated cells. The required likelihood functions of the log odds ratios are obtained, and the determination of prior structures is discussed. Hypothesis testing for log odds ratios and intraclass correlations using posterior simulations is outlined. Because the proposed approach involves no asymptotic theory, it is useful for the estimation and hypothesis testing of log odds ratios in the presence of certain intraclass correlation patterns. A family health status and limitations data set is analyzed with the proposed approach to assess the impact of intraclass correlations on the estimates and hypothesis tests of log odds ratios. Although the intraclass correlations in this data set are small, we find that even small intraclass correlations can significantly affect the estimates and test results, and that our approach is useful for the estimation and testing of log odds ratios in the presence of intraclass correlations.
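A minimal sketch of the posterior-simulation step, assuming plain multinomial sampling and hence ignoring the paper's intraclass-correlation structure; the counts and the Dirichlet(1,1,1,1) prior are hypothetical.

```python
# Posterior simulation of a 2x2 log odds ratio under a Dirichlet prior.
import numpy as np

rng = np.random.default_rng(1)
counts = np.array([30, 10, 20, 25])            # 2x2 cells: n11, n12, n21, n22
draws = rng.dirichlet(counts + 1, size=20000)  # conjugate Dirichlet posterior
log_or = np.log(draws[:, 0] * draws[:, 3] / (draws[:, 1] * draws[:, 2]))

lo, hi = np.percentile(log_or, [2.5, 97.5])
print(f"posterior mean log OR: {log_or.mean():.3f}, 95% CrI: ({lo:.3f}, {hi:.3f})")
# H0: log OR = 0 is rejected at the 5% level if 0 lies outside the interval.
```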

3.
The score function is associated with certain optimality features in statistical inference. This review article examines the central role of the score in testing and estimation. The maximization of power in testing and the quest for efficiency in estimation both lead to the score as a guiding principle. In hypothesis testing, the locally most powerful test statistic is the score test or a transformation of it. In estimation, the optimal estimating function is the score. The same link can be made in the case of nuisance parameters: the optimal test function should have maximum correlation with the score of the parameter of primary interest. We complement this result by showing that the same criterion should be satisfied in the estimation problem as well.
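For reference, the central object of this review in its simplest scalar form: for a regular model with log-likelihood $\ell(\theta)$ and hypothesis $H_0\colon \theta = \theta_0$, the score test statistic is

```latex
S = \frac{U(\theta_0)^{2}}{I(\theta_0)}, \qquad
U(\theta) = \frac{\partial \ell(\theta)}{\partial \theta}, \qquad
I(\theta) = \mathrm{E}\!\left[-\frac{\partial^{2} \ell(\theta)}{\partial \theta^{2}}\right],
```

which is asymptotically $\chi^{2}_{1}$ under $H_0$ and requires estimation only under the null; the corresponding optimal estimating equation sets $U(\theta) = 0$.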

4.
This paper considers the development of inferential techniques based on the generalized variable method (GV-Method) for the location parameter of the general half-normal distribution. We are interested in hypothesis testing of, and interval estimation for, the location parameter. Body fat data, urinary excretion rate data, and simulated data are used to illustrate the application and to demonstrate the advantages of the proposed GV-Method over the large-sample method and the Bayesian method.

5.
In the Bayesian approach, the Behrens–Fisher problem has usually been posed as one of estimating the difference of two means. No Bayesian solution to the Behrens–Fisher testing problem has yet been given, perhaps because the conventional priors used are improper. While default Bayesian analysis can be carried out for estimation purposes, it poses difficulties for testing problems. This paper generates sensible intrinsic and fractional prior distributions for the Behrens–Fisher testing problem from the improper priors commonly used for estimation, which allows us to compute the Bayes factor for comparing the null and alternative hypotheses. This default model selection procedure is compared with a frequentist test and the Bayesian information criterion. We find a discrepancy: the frequentist test and the Bayesian information criterion reject the null hypothesis for data for which the Bayes factor under intrinsic or fractional priors does not.
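For intuition, a minimal sketch with simulated data: a frequentist Welch test alongside a crude BIC-based Bayes-factor approximation, which here stands in for the paper's intrinsic/fractional-prior Bayes factor.

```python
# Behrens-Fisher setting: two normal samples with unequal variances.
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 25)
y = rng.normal(0.4, 2.0, 40)

t, p = stats.ttest_ind(x, y, equal_var=False)        # Welch's t-test
print(f"Welch t = {t:.3f}, p = {p:.4f}")

def loglik(sample, mu):
    s2 = np.mean((sample - mu) ** 2)                 # variance MLE given mu
    return np.sum(stats.norm.logpdf(sample, mu, np.sqrt(s2)))

n = len(x) + len(y)
ll1 = loglik(x, x.mean()) + loglik(y, y.mean())      # H1: separate means
ll0 = -minimize_scalar(lambda m: -(loglik(x, m) + loglik(y, m))).fun  # H0
bic0, bic1 = 3 * np.log(n) - 2 * ll0, 4 * np.log(n) - 2 * ll1
print(f"approximate BF01 = {np.exp((bic1 - bic0) / 2):.3f}")  # >1 favors H0
```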

6.
This paper develops a method for estimating the parameters of a vector autoregression (VAR) observed in white noise. The estimation method assumes that the noise variance matrix is known and does not require any iterative process. This study provides consistent estimators and the asymptotic distribution of the parameters required for conducting tests of Granger causality. Methods in the existing statistical literature cannot be used for this purpose, since under the null hypothesis the model becomes unidentifiable. The effects of measurement error on the parameter estimates were evaluated through computational simulations. The results suggest that the proposed approach produces empirical false positive rates close to the adopted nominal level (even for small samples) and performs satisfactorily around the null hypothesis. The applicability and usefulness of the proposed approach are illustrated using a functional magnetic resonance imaging dataset.
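A minimal sketch of the setting using the standard Granger-causality test from statsmodels on noisy observations; unlike the paper's method, it makes no correction for the additive white measurement noise, so it illustrates the problem rather than the solution. All series are simulated.

```python
# Latent VAR(1) observed in white noise; standard Granger test on the
# noisy observations.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 400
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):                     # latent dynamics: x drives y
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.4 * y[t - 1] + 0.3 * x[t - 1] + rng.standard_normal()
x_obs = x + 0.5 * rng.standard_normal(n)  # observed in white noise
y_obs = y + 0.5 * rng.standard_normal(n)

# Tests whether the second column Granger-causes the first; prints an
# F test for each lag up to maxlag.
grangercausalitytests(np.column_stack([y_obs, x_obs]), maxlag=2)
```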

7.
This article proposes a test to determine whether “big data” nowcasting methods, which have become an important tool for many public and private institutions, improve monotonically as new information becomes available. The test is the first to formalize existing evaluation procedures from the nowcasting literature. We place particular emphasis on models involving estimated factors, since factor-based methods are a leading case in the high-dimensional empirical nowcasting literature, although our test remains applicable to small-dimensional setups such as bridge equations and MIDAS models. Our approach extends a recent methodology for testing many moment inequalities to the case of nowcast monotonicity testing, allowing the number of inequalities to grow with the sample size. We provide results showing the conditions under which both parameter estimation error and factor estimation error can be accommodated in this high-dimensional setting when using the pseudo out-of-sample approach. The finite-sample performance of our test is illustrated using a wide range of Monte Carlo simulations, and we conclude with an empirical application to nowcasting U.S. real gross domestic product (GDP) growth and five GDP sub-components. Our test results confirm monotonicity for all but one sub-component (government spending), suggesting that the factor-augmented model may be misspecified for this GDP constituent. Supplementary materials for this article are available online.
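A schematic check of the monotonicity idea, far simpler than the paper's many-moment-inequality test: with hypothetical nowcast errors whose variance shrinks as the quarter's information set grows, examine each "more data is no worse" inequality with a paired one-sided t-test on squared errors.

```python
# Toy monotonicity check on pseudo out-of-sample squared nowcast errors.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T = 120
errors = {h: rng.normal(0.0, s, T)    # error shrinks as information accrues
          for h, s in zip(["early", "mid", "late"], [1.0, 0.7, 0.5])}
loss = {h: e**2 for h, e in errors.items()}

for a, b in [("early", "mid"), ("mid", "late")]:
    d = loss[a] - loss[b]             # monotonicity predicts mean(d) >= 0
    t = d.mean() / (d.std(ddof=1) / np.sqrt(T))
    print(f"{a} -> {b}: t = {t:.2f}, one-sided p = {stats.t.sf(t, T - 1):.4f}")
```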

8.
In sequential studies, formal interim analyses are usually restricted to a consideration of a single null hypothesis concerning a single parameter of interest. Valid frequentist methods of hypothesis testing and of point and interval estimation for the primary parameter have already been devised for use at the end of such a study. However, the completed data set may warrant a more detailed analysis, involving the estimation of parameters corresponding to effects that were not used to determine when to stop, and yet correlated with those that were. This paper describes methods for setting confidence intervals for secondary parameters in a way which provides the correct coverage probability in repeated frequentist realizations of the sequential design used. The method assumes that information accumulates on the primary and secondary parameters at proportional rates. This requirement will be valid in many potential applications, but only in limited situations in survival analysis.

9.
Cross-Sectional Time Series Analysis: Model Selection and Parameter Estimation
Yang Dongsheng, Statistical Research (《统计研究》), 1999, 16(1): 46–50
Cross-sectional time series (panel data) models have been developed over the past three decades; references [2] and [3] describe their general theory. Although these models have been studied and applied widely and in depth abroad, they have not yet attracted the attention of statisticians and quantitative economic analysts in China. Starting from the needs of empirical economic analysis, this paper discusses model selection, parameter estimation, and hypothesis…

10.
This article examines a semiparametric test for checking the constancy of serial dependence via copula models for Markov time series. A semiparametric score test is proposed for testing the constancy of the copula parameter against a stochastically varying copula parameter. The asymptotic null distribution of the test is established. A semiparametric bootstrap procedure is employed to estimate the variance of the proposed score test. Illustrations are given based on simulated series and historical interest rate data.

11.
A unified approach is developed for testing hypotheses in the general linear model based on the ranks of the residuals. It complements the nonparametric estimation procedures recently reported in the literature. The testing and estimation procedures together provide a robust alternative to least squares. The methods are similar in spirit to least squares so that results are simple to interpret. Hypotheses concerning a subset of specified parameters can be tested, while the remaining parameters are treated as nuisance parameters. Asymptotically, the test statistic is shown to have a chi-square distribution under the null hypothesis. This result is then extended to cover a sequence of contiguous alternatives from which the Pitman efficacy is derived. The general application of the test requires the consistent estimation of a functional of the underlying distribution and one such estimate is furnished.
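A minimal sketch of the estimation side of rank-based linear model inference (the testing machinery builds on the same residual ranks): minimize Jaeckel's rank dispersion with Wilcoxon scores. The data are simulated with heavy-tailed errors, where least squares is fragile.

```python
# Rank-based (R) estimation: minimize sum_i a(R(e_i)) * e_i with
# Wilcoxon scores a(i) = sqrt(12) * (i/(n+1) - 1/2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

rng = np.random.default_rng(5)
n = 200
X = rng.standard_normal((n, 2))
beta_true = np.array([1.0, -2.0])
y = X @ beta_true + rng.standard_t(df=3, size=n)   # heavy-tailed errors

def dispersion(beta):
    e = y - X @ beta
    scores = np.sqrt(12) * (rankdata(e) / (n + 1) - 0.5)
    return np.sum(scores * e)

fit = minimize(dispersion, x0=np.zeros(2), method="Nelder-Mead")
print("rank-based estimate:", fit.x)   # robust alternative to least squares
```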

12.
Methods for interval estimation and hypothesis testing about the ratio of two independent inverse Gaussian (IG) means, based on the generalized variable approach, are proposed. As assessed by simulation, the coverage probabilities of the proposed approach are found to be very close to the nominal level even for small samples. The proposed approaches are conceptually simple and easy to use. Similar procedures are developed for constructing confidence intervals and hypothesis testing about the difference between two independent IG means. Monte Carlo comparison studies show that the results based on the generalized variable approach are as good as those based on the modified likelihood ratio test. The methods are illustrated using two examples.
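As a simple stand-in for the generalized variable approach (which is not reproduced here), a parametric-bootstrap percentile interval for the ratio of two IG means; all samples and parameter values are hypothetical.

```python
# Parametric bootstrap CI for the ratio of two inverse Gaussian means.
import numpy as np
from scipy import stats

def ig_mle(x):
    mu = x.mean()
    lam = len(x) / np.sum(1.0 / x - 1.0 / mu)   # standard IG shape MLE
    return mu, lam

def ig_sample(mu, lam, size, rng):
    # scipy parameterization: IG(mean=mu, shape=lam) = invgauss(mu/lam, scale=lam)
    return stats.invgauss.rvs(mu / lam, scale=lam, size=size, random_state=rng)

rng = np.random.default_rng(6)
x = ig_sample(2.0, 4.0, 30, rng)                # hypothetical group 1
y = ig_sample(1.5, 6.0, 35, rng)                # hypothetical group 2

(mx, lx), (my, ly) = ig_mle(x), ig_mle(y)
ratios = [ig_sample(mx, lx, len(x), rng).mean() /
          ig_sample(my, ly, len(y), rng).mean() for _ in range(5000)]
print("95% CI for mu_x / mu_y:", np.percentile(ratios, [2.5, 97.5]))
```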

13.
This paper considers nonlinear regression models in which neither the response variable nor the covariates can be observed directly, but are instead measured with both multiplicative and additive distortion measurement errors. We propose conditional variance and conditional mean calibration methods to estimate the unobserved variables, and then propose a nonlinear least squares estimator. For hypothesis testing of the parameters, a restricted estimator under the null hypothesis and a test statistic are proposed. The asymptotic properties of the estimator and the test statistic are established. Lastly, a residual-based empirical process test statistic, marked by proper functions of the regressors, is proposed for the model checking problem. We further suggest a bootstrap procedure to calculate critical values. Simulation studies demonstrate the performance of the proposed procedures, and a real example is analysed to illustrate their practical usage.

14.
The statistical inference problem for effect size indices is addressed using a series of independent two-armed experiments from k arbitrary populations. The effect size parameter quantifies the difference between two groups and is a meaningful index when data are measured on different scales. In the context of bivariate statistical models, we define estimators of the effect size indices and propose large-sample testing procedures for the homogeneity of these indices. The null and non-null distributions of the proposed test statistics are derived, and their performance is evaluated via Monte Carlo simulation. Further, three types of interval estimation of the proposed indices are considered for both combined and uncombined data. Lower and upper confidence limits for the actual effect size indices are obtained and compared via bootstrapping. It is found that the length of the intervals based on the combined effect size estimator is almost half that of the intervals based on the uncombined effect size estimators. Finally, we illustrate the proposed procedures for hypothesis testing and interval estimation using a real data set.
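A minimal sketch of one standard version of this pipeline, as a generic stand-in for the authors' exact statistics: Cohen's d per experiment, inverse-variance combination, and a Cochran-style Q homogeneity test with Q approximately chi-square(k-1) under homogeneity. The studies are simulated.

```python
# Effect sizes from k two-armed experiments and a homogeneity test.
import numpy as np
from scipy import stats

def cohens_d(a, b):
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    d = (a.mean() - b.mean()) / np.sqrt(sp2)
    var_d = (na + nb) / (na * nb) + d**2 / (2 * (na + nb))  # large-sample var
    return d, var_d

rng = np.random.default_rng(7)
studies = [(rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)) for _ in range(5)]
d, v = map(np.array, zip(*[cohens_d(a, b) for a, b in studies]))

w = 1 / v
d_combined = np.sum(w * d) / np.sum(w)          # inverse-variance pooling
Q = np.sum(w * (d - d_combined) ** 2)
print(f"combined d = {d_combined:.3f}, Q = {Q:.2f}, "
      f"p = {stats.chi2.sf(Q, df=len(d) - 1):.3f}")
```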

15.
This article considers an approach to estimating and testing a new Kronecker product covariance structure for three-level multivariate data (multiple time points (p), multiple sites (u), and multiple response variables (q)). Testing such a covariance structure is potentially important for high-dimensional multi-level multivariate data. The hypothesis testing procedure developed in this article can not only test this hypothesis for three-level multivariate data, but can also test many other hypotheses, such as blocked compound symmetry, for two-level multivariate data as special cases. The tests are illustrated with two real data sets.
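To fix notation, a small numeric illustration of the separable (Kronecker) covariance structure in question; the AR(1) factors and the dimensions are hypothetical.

```python
# Separable three-level covariance: Sigma = U kron V kron W, one factor
# per level (sites, time points, response variables).
import numpy as np

def ar1(dim, rho):
    idx = np.arange(dim)
    return rho ** np.abs(idx[:, None] - idx[None, :])

U = ar1(3, 0.5)        # u = 3 sites
V = ar1(4, 0.7)        # p = 4 time points
W = ar1(2, 0.3)        # q = 2 response variables

Sigma = np.kron(np.kron(U, V), W)   # (u*p*q) x (u*p*q) = 24 x 24
print(Sigma.shape)
# The separable structure has u(u+1)/2 + p(p+1)/2 + q(q+1)/2 parameters
# instead of upq(upq+1)/2, which is what makes testing it worthwhile in
# high dimensions.
```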

16.
Trimmed samples are commonly used in several branches of statistical methodology, especially when the presence of contaminated data is suspected. Assuming that certain proportions of the smallest and largest observations from a Weibull sample are unknown or have been eliminated, a Bayesian approach to point and interval estimation of the scale parameter, as well as to hypothesis testing and prediction, is presented. In many cases, the use of substantial prior information can significantly increase the quality of the inferences and reduce the amount of testing required. Some Bayes estimators and predictors are derived in closed form. Highest posterior density estimators and credibility intervals can be computed using iterative methods. Bayes rules for testing one- and two-sided hypotheses are also provided. An illustrative numerical example is included.
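A conjugate sketch of the Bayesian machinery under two simplifying assumptions the paper does not make: the shape parameter k is known and the sample is complete (untrimmed). With theta = sigma^(-k), the Weibull likelihood is gamma-conjugate.

```python
# Conjugate Bayes for the Weibull scale with known shape k:
# Gamma(a0, b0) prior on theta = sigma**(-k)
#   -> Gamma(a0 + n, b0 + sum(x_i**k)) posterior.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
k, sigma_true = 2.0, 3.0
x = sigma_true * rng.weibull(k, size=50)

a0, b0 = 1.0, 1.0                                   # hypothetical prior
a_post, b_post = a0 + len(x), b0 + np.sum(x ** k)

theta = stats.gamma.rvs(a_post, scale=1 / b_post, size=20000,
                        random_state=rng)
sigma = theta ** (-1 / k)                           # back to the scale
lo, hi = np.percentile(sigma, [2.5, 97.5])
print(f"posterior mean sigma: {sigma.mean():.3f}, 95% CrI: ({lo:.3f}, {hi:.3f})")
```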

17.
Assessment of the time needed to attain steady state is a key pharmacokinetic objective during drug development. Traditional approaches for assessing steady state include ANOVA-based methods for comparing mean plasma concentration values from each sampling day, with either a difference or an equivalence test. However, hypothesis-testing approaches are ill suited to the assessment of steady state. This paper presents a nonlinear mixed effects modelling approach for estimating steady state attainment, based on fitting a simple nonlinear mixed model to observed trough plasma concentrations. The simple nonlinear mixed model is developed and proposed for use under certain pharmacokinetic assumptions. The estimation approach is described and illustrated by application to trough data from a multiple-dose trial in healthy subjects. The performance of the nonlinear mixed modelling approach is compared with ANOVA-based approaches by means of simulation techniques.
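A minimal sketch of the underlying accumulation model, fitted here to a single subject by ordinary nonlinear least squares rather than as a mixed model; all constants are hypothetical.

```python
# Trough concentrations approach steady state as C(day) = Css*(1 - exp(-k*day)).
import numpy as np
from scipy.optimize import curve_fit

def accumulation(day, css, k):
    return css * (1 - np.exp(-k * day))

rng = np.random.default_rng(9)
days = np.array([1, 2, 3, 4, 5, 7, 10, 14], dtype=float)
conc = accumulation(days, 10.0, 0.6) * rng.lognormal(0, 0.1, len(days))

(css, k), _ = curve_fit(accumulation, days, conc, p0=[conc.max(), 0.3])
t90 = -np.log(1 - 0.9) / k    # day by which 90% of steady state is reached
print(f"Css = {css:.2f}, k = {k:.3f}, 90% of steady state by day {t90:.1f}")
```

In the mixed-model version, Css and k carry subject-level random effects, so steady state attainment is estimated for the population rather than tested day by day.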

18.
A statistical test can be seen as a procedure for producing a decision based on observed data, where some decisions consist of rejecting a hypothesis (yielding a significant result) and some do not, and where one controls the probability of making a wrong rejection at some prespecified significance level. Whereas traditional hypothesis testing involves only two possible decisions (to reject or not reject a null hypothesis), Kaiser’s directional two-sided test, as well as the more recently introduced testing procedure of Jones and Tukey, each equivalent to running two one-sided tests, involves three possible decisions when inferring the value of a unidimensional parameter. The latter procedure assumes that a point null hypothesis is impossible (e.g., that two treatments cannot have exactly the same effect), allowing a gain in statistical power. There are, however, situations where a point hypothesis is indeed plausible, for example, when considering hypotheses derived from Einstein’s theories. In this article, we introduce a five-decision testing procedure, equivalent to running a traditional two-sided test in addition to two one-sided tests, which combines the advantages of the procedures of Kaiser (no assumption that a point hypothesis is impossible) and of Jones and Tukey (higher power), allowing a non-negligible (typically 20%) reduction in the sample size needed to reach a given statistical power, compared with the traditional approach.
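A hedged sketch of how such a rule can be operationalized (my schematic reading, not necessarily the article's exact procedure): combine a two-sided t-test with the two one-sided tests and return one of five conclusions about theta = mean(x) - mean(y).

```python
# Five possible decisions from one two-sided and two one-sided t-tests.
import numpy as np
from scipy import stats

def five_decision(x, y, alpha=0.05):
    t, p_two = stats.ttest_ind(x, y)
    p_greater = stats.ttest_ind(x, y, alternative="greater").pvalue
    p_less = stats.ttest_ind(x, y, alternative="less").pvalue
    if p_two <= alpha:                 # two-sided rejection: sign established
        return "theta > 0" if t > 0 else "theta < 0"
    if p_greater <= alpha:             # only one-sided rejection: weaker claim
        return "theta >= 0"
    if p_less <= alpha:
        return "theta <= 0"
    return "no decision"

rng = np.random.default_rng(10)
print(five_decision(rng.normal(0.3, 1, 60), rng.normal(0.0, 1, 60)))
```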

19.
In this article, a simple algorithm is used to maximize a family of optimal statistics for hypothesis testing when a nuisance parameter is not defined under the null hypothesis. This situation arises in genetic linkage and association studies, among other hypothesis testing problems. The maximum of the optimal statistics over the nuisance parameter space can be used as a robust test in this situation. Here, we use the maximum and minimum statistics to examine the sensitivity of testing results with respect to the unknown nuisance parameter. Examples from genetic linkage analysis using affected sib pairs and a candidate-gene association study in a case-parent trio design are presented.
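A minimal sketch of the MAX idea for a case-control genotype table with hypothetical counts: the trend statistic depends on a heterozygote score x in [0, 1] (x = 0, 0.5, 1 for recessive, additive, dominant models), a nuisance parameter undefined under the null, and MAX takes the largest statistic over a grid.

```python
# Trend statistic maximized over the unknown genetic-model score x.
import numpy as np

cases = np.array([30.0, 50.0, 20.0])      # genotype counts aa, Aa, AA
controls = np.array([50.0, 40.0, 10.0])

def trend_stat(x):
    s = np.array([0.0, x, 1.0])           # genotype scores
    r, c = cases.sum(), controls.sum()
    p = (cases + controls) / (r + c)      # pooled genotype frequencies
    diff = np.sum(s * (cases / r - controls / c))
    var = (1 / r + 1 / c) * (np.sum(s**2 * p) - np.sum(s * p) ** 2)
    return diff / np.sqrt(var)

grid = np.linspace(0.0, 1.0, 21)
tvals = np.array([trend_stat(x) for x in grid])
print(f"MAX = {tvals.max():.3f} at x = {grid[tvals.argmax()]:.2f}, "
      f"MIN = {tvals.min():.3f}")
# The null distribution of MAX is non-standard and is typically obtained
# by permutation or simulation rather than from a chi-square table.
```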

20.
This article considers a one-way random effects model for assessing the proportion of workers whose mean exposures exceed the occupational exposure limit, based on exposure measurements from a random sample of workers. Hypothesis testing and interval estimation for the relevant parameter of interest are proposed for the case where the exposure data are unbalanced. The methods are based on the generalized p-value approach and simplify to those of Krishnamoorthy and Mathew (J. Agric. Biol. Environ. Statist. 7 (2002) 440) when the data are balanced. The sizes and powers of the test are evaluated numerically. The numerical studies show that the proposed inferential procedures are satisfactory even for small samples. The results are illustrated using practical examples.
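A point-estimate sketch for balanced data with hypothetical parameters; the generalized p-value machinery in the paper additionally delivers tests and confidence limits and covers unbalanced data.

```python
# One-way random effects on log exposures; estimate the fraction of
# workers whose mean exposure exceeds the limit (OEL).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
k, n = 15, 6                          # workers, measurements per worker
mu, sb, sw, oel = np.log(0.8), 0.5, 0.4, 1.0
b = rng.normal(0.0, sb, k)
y = mu + b[:, None] + rng.normal(0.0, sw, (k, n))   # log exposures

ybar = y.mean(axis=1)
msw = np.sum((y - ybar[:, None]) ** 2) / (k * (n - 1))  # within-worker MS
msb = n * ybar.var(ddof=1)                              # between-worker MS
sb2 = max((msb - msw) / n, 0.0)       # ANOVA estimate of between variance

# Under lognormality, worker i's mean exposure is exp(mu + b_i + sw^2/2),
# so P(mean exposure > OEL) = Phi((mu + sw^2/2 - log OEL) / sb).
theta = stats.norm.sf((np.log(oel) - (y.mean() + msw / 2)) / np.sqrt(sb2))
print(f"estimated exceedance fraction: {theta:.3f}")
```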
