Related Articles

20 related articles found.
1.
A multi‐sample test for equality of mean directions is developed for populations having Langevin‐von Mises‐Fisher distributions with a common unknown concentration. The proposed test statistic is a monotone transformation of the likelihood ratio. The high‐concentration asymptotic null distribution of the test statistic is derived. In contrast to previously suggested high‐concentration tests, the high‐concentration asymptotic approximation to the null distribution of the proposed test statistic is also valid for large sample sizes with any fixed nonzero concentration parameter. Simulations of size and power show that the proposed test outperforms competing tests. An example with three‐dimensional data from an anthropological study illustrates the practical application of the testing procedure.

2.
K correlated 2×2 tables with structural zero are commonly encountered in infectious disease studies. This paper considers a hypothesis test for the risk difference in K independent 2×2 tables with structural zero. Score, likelihood ratio and Wald‐type statistics are proposed for testing the hypothesis on the basis of stratified data and pooled data. Sample size formulae are derived for controlling a pre‐specified power or a pre‐determined confidence interval width. Our empirical results show that the score and likelihood ratio statistics behave better than the Wald‐type statistic in terms of type I error rate and coverage probability, and that sample sizes based on the stratified test are smaller than those based on the pooled test under the same design. A real example is used to illustrate the proposed methodologies. Copyright © 2009 John Wiley & Sons, Ltd.

3.
An empirical likelihood-based inferential procedure is developed for a class of general additive-multiplicative hazard models. The proposed log-empirical likelihood ratio test statistic for the parameter vector is shown to have a chi-squared limiting distribution. The result can be used to make inference about the entire parameter vector as well as any linear combination of it. The asymptotic power of the proposed test statistic under contiguous alternatives is discussed. The method is illustrated by extensive simulation studies and a real example.

4.
In this paper, we suggest a similar unit root test statistic for dynamic panel data with fixed effects. The test is based on the LM, or score, principle and is derived under the assumption that the time dimension of the panel is fixed, which is typical in many panel data studies. It is shown that the limiting distribution of the test statistic is standard normal. The similarity of the test with respect to both the initial conditions of the panel and the fixed effects is achieved by allowing for a trend in the model using a parameterisation that has the same interpretation under both the null and alternative hypotheses. This parameterisation can be expected to increase the power of the test statistic. Simulation evidence suggests that the proposed test has empirical size that is very close to the nominal level and considerably more power than other panel unit root tests that assume that the time dimension of the panel is large. As an application of the test, we re-examine the stationarity of real stock prices and dividends using disaggregated panel data over a relatively short period of time. Our results suggest that while real stock prices contain a unit root, real dividends are trend stationary.

5.
Bayesian synthetic likelihood (BSL) is now a well-established method for performing approximate Bayesian parameter estimation for simulation-based models that do not possess a tractable likelihood function. BSL approximates an intractable likelihood function of a carefully chosen summary statistic at a parameter value with a multivariate normal distribution. The mean and covariance matrix of this normal distribution are estimated from independent simulations of the model. Due to the parametric assumption implicit in BSL, it can be preferred to its nonparametric competitor, approximate Bayesian computation, in certain applications where a high-dimensional summary statistic is of interest. However, despite several successful applications of BSL, its widespread use in scientific fields may be hindered by the strong normality assumption. In this paper, we develop a semi-parametric approach to relax this assumption to an extent and maintain the computational advantages of BSL without any additional tuning. We test our new method, semiBSL, on several challenging examples involving simulated and real data and demonstrate that semiBSL can be significantly more robust than BSL and another approach in the literature.
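The core BSL step described in this abstract, estimating a Gaussian synthetic likelihood of a summary statistic from independent model simulations, can be sketched in a few lines. This is an illustrative outline under simplifying assumptions, not the authors' implementation; `simulate` is a hypothetical user-supplied function returning one simulated summary-statistic vector.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, obs_summary, simulate, n_sim=500, rng=None):
    """Plug-in Gaussian synthetic log-likelihood of a summary statistic.

    `simulate(theta, rng)` (hypothetical) returns one simulated
    summary-statistic vector at parameter value `theta`.
    """
    rng = np.random.default_rng(rng)
    sims = np.array([simulate(theta, rng) for _ in range(n_sim)])
    mu = sims.mean(axis=0)            # estimated mean of the summary
    cov = np.cov(sims, rowvar=False)  # estimated covariance of the summary
    return multivariate_normal.logpdf(obs_summary, mean=mu, cov=cov)
```

In a BSL sampler this estimate would replace the exact log-likelihood inside, for example, a Metropolis-Hastings acceptance ratio.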

6.
Likelihood‐based inference with missing data is challenging because the observed log likelihood is often an (intractable) integration over the missing data distribution, which also depends on the unknown parameter. Approximating the integral by Monte Carlo sampling does not necessarily lead to a valid likelihood over the entire parameter space because the Monte Carlo samples are generated from a distribution with a fixed parameter value. We consider approximating the observed log likelihood based on importance sampling. In the proposed method, the dependency of the integral on the parameter is properly reflected through fractional weights. We discuss constructing a confidence interval using the profile likelihood ratio test. A Newton–Raphson algorithm is employed to find the interval end points. Two limited simulation studies show the advantage of the Wilks inference over the Wald inference in terms of power, parameter space conformity and computational efficiency. A real data example on salamander mating shows that our method also works well with high‐dimensional missing data.

7.
Many statistical models arising in applications contain non‐ and weakly‐identified parameters. Owing to these identifiability concerns, tests concerning the parameters of interest may not be able to rely on conventional theory, and it may be unclear how to assess statistical significance. This paper extends the literature by developing a testing procedure that can be used to evaluate hypotheses under non‐ and weakly‐identifiable semiparametric models. The test statistic is constructed from a general estimating function of a finite dimensional parameter representing the population characteristics of interest, while other characteristics, which may be described by infinite dimensional parameters and viewed as nuisance, are left completely unspecified. We derive the limiting distribution of this statistic and propose theoretically justified resampling approaches to approximate its asymptotic distribution. The methodology's practical utility is illustrated in simulations and an analysis of quality‐of‐life outcomes from a longitudinal study on breast cancer.

8.
In many case-control studies, it is common to utilize paired data when treatments are being evaluated. In this article, we propose and examine an efficient distribution-free test to compare two independent samples, where each is based on paired observations. We extend and modify the density-based empirical likelihood ratio test presented by Gurevich and Vexler [7] to formulate an appropriate parametric likelihood ratio test statistic corresponding to the hypothesis of our interest and then to approximate the test statistic nonparametrically. We conduct an extensive Monte Carlo study to evaluate the proposed test. The results of the performed simulation study demonstrate the robustness of the proposed test with respect to values of test parameters. Furthermore, an extensive power analysis via Monte Carlo simulations confirms that the proposed method outperforms the classical and general procedures in most cases related to a wide class of alternatives. An application to a real paired data study illustrates that the proposed test can be efficiently implemented in practice.

9.
We propose a non‐parametric change‐point test for long‐range dependent data, which is based on the Wilcoxon two‐sample test. We derive the asymptotic distribution of the test statistic under the null hypothesis that no change occurred. In a simulation study, we compare the power of our test with the power of a test which is based on differences of means. The results of the simulation study show that in the case of Gaussian data, our test has only slightly smaller power than the 'difference‐of‐means' test. For heavy‐tailed data, our test outperforms the 'difference‐of‐means' test.
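As a minimal illustration of the Wilcoxon-based idea in this abstract, the unnormalized two-sample statistic can be scanned over all candidate change points. This is only a sketch: the normalization needed to calibrate the test under long-range dependence (which involves the Hurst parameter) is deliberately omitted, so this is not the authors' full procedure.

```python
import numpy as np

def wilcoxon_changepoint(x):
    """Unnormalized Wilcoxon-type change-point scan.

    Returns (max_k |W_k|, argmax k), where
        W_k = sum_{i<=k, j>k} ( 1{x_i <= x_j} - 1/2 ).
    Calibration under long-range dependence is omitted here.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    best, best_k = -np.inf, 0
    for k in range(1, n):
        left, right = x[:k], x[k:]
        # count of pairs with left value <= right value, centered at its
        # null expectation k*(n-k)/2
        w = np.sum(left[:, None] <= right[None, :]) - 0.5 * k * (n - k)
        if abs(w) > best:
            best, best_k = abs(w), k
    return best, best_k
```

A large |W_k| indicates that observations after position k are stochastically larger (or smaller) than those before it.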

10.
The non-monotonic behaviour of the Wald test in some finite-sample applications leads to low power when the null hypothesis needs rejection most. This article proposes a simple check for discerning if the Wald statistic for testing significance of regression coefficients is non-monotonic in the neighbourhood of the parameter space from which the sample data are drawn. Monte Carlo simulations show that this method works rather well for detecting situations where the Wald test can be safely applied. An example is provided to illustrate the use of this check.

11.
We consider varying coefficient models, which extend classical linear regression models in the sense that the regression coefficients are replaced by functions of certain variables (for example, time), and the covariates are also allowed to depend on other variables. Varying coefficient models are popular in longitudinal data and panel data studies, and have been applied in fields such as finance and health sciences. We consider longitudinal data and estimate the coefficient functions by the flexible B-spline technique. An important question in a varying coefficient model is whether an estimated coefficient function is statistically different from a constant (or zero). We develop testing procedures based on the estimated B-spline coefficients by making use of nice properties of a B-spline basis. Our method allows longitudinal data in which repeated measurements for an individual can be correlated. We obtain the asymptotic null distribution of the test statistic. The power of the proposed testing procedures is illustrated on simulated data, where we highlight the importance of including the correlation structure of the response variable, and on real data.

12.
This paper is concerned with interval estimation for the breakpoint parameter in segmented regression. We present score‐type confidence intervals derived from the score statistic itself and from the recently proposed gradient statistic. Because the score lacks the usual regularity conditions, being non‐smooth and non‐monotonic, naive application of the score‐based statistics is unfeasible, and we propose to exploit the smoothed score obtained via induced smoothing. We compare our proposals with the traditional methods based on the Wald and the likelihood ratio statistics via simulations and an analysis of a real dataset: results show that the smoothed score‐like statistics perform in practice somewhat better than competitors, even when the model is not correctly specified.

13.
Tan Xiangyong et al. 《统计研究》 (Statistical Research), 2021, 38(2): 135-145
The partial functional linear varying-coefficient model (PFLVCM) is a flexible and widely applicable model that has emerged in recent years. In practice, the economic and financial data collected are often serially correlated, and modelling such data without accounting for this correlation undermines the accuracy and efficiency of the parameter estimates. This paper studies testing for serial correlation in the errors of the PFLVCM. Based on empirical likelihood, we extend a serial-correlation test for scalar time series to functional data, propose an empirical log-likelihood ratio test statistic, and derive its approximate distribution under the null hypothesis. Monte Carlo simulations show that the statistic has good size and power in finite samples. Finally, the method is applied to test whether US commercial electricity consumption data are serially correlated, demonstrating the effectiveness and practical utility of the statistic.

14.
There are several measures that are commonly used to assess performance of a multiple testing procedure (MTP). These measures include power, overall error rate (family‐wise error rate), and lack of power. In settings where the MTP is used to estimate a parameter, for example, the minimum effective dose, bias is of interest. In some studies, the parameter has a set‐like structure, and thus, bias is not well defined. Nevertheless, the accuracy of estimation is one of the essential features of an MTP in such a context. In this paper, we propose several measures based on the expected values of loss functions that resemble bias. These measures are constructed to be useful in combination drug dose response studies when the target is to identify all minimum efficacious drug combinations. One of the proposed measures allows for assigning different penalties for incorrectly overestimating and underestimating a true minimum efficacious combination. Several simple examples are considered to illustrate the proposed loss functions. Then, the expected values of these loss functions are used in a simulation study to identify the best procedure among several methods used to select the minimum efficacious combinations, where the measures take into account the investigator's preferences about possibly overestimating and/or underestimating a true minimum efficacious combination. The ideas presented in this paper can be generalized to construct measures that resemble bias in other settings. These measures can serve as an essential tool to assess performance of several methods for identifying set‐like parameters in terms of accuracy of estimation. Copyright © 2012 John Wiley & Sons, Ltd.

15.
The authors propose a new type of scan statistic to test for the presence of space‐time clusters in point process data, when the goal is to identify and evaluate the statistical significance of localized clusters. Their method is based only on point patterns for cases; it does not require any specific knowledge of the underlying population. The authors propose to scan the three‐dimensional space with a score test statistic under the null hypothesis that the underlying point process is an inhomogeneous Poisson point process with space and time separable intensity. The alternative is that there are one or more localized space‐time clusters. Their method has been implemented in a computationally efficient way so that it can be applied routinely. They illustrate their method with space‐time crime data from Belo Horizonte, a Brazilian city, in addition to presenting a Monte Carlo study to analyze the power of their new test.

16.
This paper develops a test for comparing treatment effects when observations are missing at random for repeated measures data on independent subjects. It is assumed that missingness at any occasion follows a Bernoulli distribution. It is shown that the distribution of the vector of linear rank statistics depends on the unknown parameters of the probability law that governs missingness, which is absent in the existing conditional methods employing rank statistics. This dependence is through the variance–covariance matrix of the vector of linear ranks. The test statistic is a quadratic form in the linear rank statistics when the variance–covariance matrix is estimated. The limiting distribution of the test statistic is derived under the null hypothesis. Several methods of estimating the unknown components of the variance–covariance matrix are considered. The estimate that produces stable empirical Type I error rate while maintaining the highest power among the competing tests is recommended for implementation in practice. Simulation studies are also presented to show the advantage of the proposed test over other rank-based tests that do not account for the randomness in the missing data pattern. Our method is shown to have the highest power while also maintaining near-nominal Type I error rates. Our results clearly illustrate that even for an ignorable missingness mechanism, the randomness in the pattern of missingness cannot be ignored. A real data example is presented to highlight the effectiveness of the proposed method.

17.
Traditionally, noninferiority hypotheses have been tested using a frequentist method with a fixed margin. Given that information for the control group is often available from previous studies, it is interesting to consider a Bayesian approach in which information is "borrowed" for the control group to improve efficiency. However, construction of an appropriate informative prior can be challenging. In this paper, we consider a hybrid Bayesian approach for testing noninferiority hypotheses in studies with a binary endpoint. To account for heterogeneity between the historical information and the current trial for the control group, a dynamic P value–based power prior parameter is proposed to adjust the amount of information borrowed from the historical data. This approach extends the simple test‐then‐pool method to allow a continuous discounting power parameter. An adjusted α level is also proposed to better control the type I error. Simulations are conducted to investigate the performance of the proposed method and to make comparisons with other methods including test‐then‐pool and hierarchical modeling. The methods are illustrated with data from vaccine clinical trials.

18.
In this article, we revisit the importance of the generalized jackknife in the construction of reliable semi-parametric estimates of some parameters of extreme or even rare events. The generalized jackknife statistic is applied to a minimum-variance reduced-bias estimator of a positive extreme value index—a primary parameter in statistics of extremes. A couple of refinements are proposed and a simulation study shows that these are able to achieve a lower mean square error. A real data illustration is also provided.

19.
Demonstrated equivalence between a categorical regression model based on case‐control data and an I‐sample semiparametric selection bias model leads to a new goodness‐of‐fit test. The proposed test statistic is an extension of an existing Kolmogorov–Smirnov‐type statistic and is the weighted average of the absolute differences between two estimated distribution functions in each response category. The paper establishes an optimal property for the maximum semiparametric likelihood estimator of the parameters in the I‐sample semiparametric selection bias model. It also presents a bootstrap procedure, some simulation results and an analysis of two real datasets.

20.
In this paper, we propose and study a new global test, namely, the GPF test, for the one‐way ANOVA problem for functional data, obtained via globalizing the usual pointwise F‐test. The asymptotic random expressions of the test statistic are derived, and its asymptotic power is investigated. The GPF test is shown to be root‐n consistent. It is much less computationally intensive than a parametric bootstrap test proposed in the literature for the one‐way ANOVA for functional data. Via some simulation studies, it is found that in terms of size‐controlling and power, the GPF test is comparable with two existing tests adopted for the one‐way ANOVA problem for functional data. A real data example illustrates the GPF test.
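The globalizing idea behind a test of this kind can be sketched as follows: compute the usual pointwise one-way ANOVA F statistic at each grid point and integrate it over the domain. This is only an illustrative construction of such a statistic under simplifying assumptions (curves observed on a common grid); calibrating it against a null distribution is a separate step not shown here.

```python
import numpy as np

def globalized_f_statistic(groups, grid):
    """Integrate the pointwise one-way ANOVA F statistic over the domain.

    `groups` is a list of arrays, each of shape (n_g, T): n_g curves
    sampled on a common grid of length T. Returns the trapezoidal-rule
    integral of the pointwise F statistic over `grid`.
    """
    k = len(groups)
    n = sum(g.shape[0] for g in groups)
    grand = np.vstack(groups).mean(axis=0)        # overall mean curve
    means = [g.mean(axis=0) for g in groups]      # group mean curves
    # pointwise between- and within-group sums of squares
    ssb = sum(g.shape[0] * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(((g - m) ** 2).sum(axis=0) for g, m in zip(groups, means))
    f_t = (ssb / (k - 1)) / (ssw / (n - k))       # pointwise F statistic
    dt = np.diff(np.asarray(grid, dtype=float))
    return float(np.sum(0.5 * (f_t[:-1] + f_t[1:]) * dt))
```

Larger values indicate that the group mean curves differ somewhere on the domain; a calibrated test would compare this statistic with its null distribution.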

