Similar Literature
20 similar documents retrieved.
1.
An evaluation of the FBST, the Full Bayesian Significance Test, restricted to survival models is the main objective of the present paper. A survival distribution is to be chosen among three celebrated ones: the lognormal, gamma, and Weibull. For this discrimination, a linear mixture of the three distributions is an important tool: the FBST is used to test hypotheses defined on the space of mixture weights. Another feature of the paper is that all three distributions are reparametrized so that all six parameters are written as functions of the mean and the variance of the population being studied. Some numerical results from simulations with right-censored data are presented.
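For illustration only (the paper publishes no code), the following is a minimal Python sketch of the mean-variance reparametrization the abstract describes: each distribution's parameters are recovered from a common population mean m and variance v. The bracket [0.1, 50] used for the Weibull shape search is our assumption and covers moderate coefficients of variation.

```python
import numpy as np
from scipy.special import gamma as G
from scipy.optimize import brentq

def lognormal_params(m, v):
    s2 = np.log(1.0 + v / m**2)               # sigma^2 from the squared CV
    return np.log(m) - s2 / 2.0, np.sqrt(s2)  # (mu, sigma)

def gamma_params(m, v):
    return m**2 / v, v / m                    # (shape, scale)

def weibull_params(m, v):
    cv2 = v / m**2                            # squared coefficient of variation
    f = lambda k: G(1 + 2.0 / k) / G(1 + 1.0 / k)**2 - 1.0 - cv2
    k = brentq(f, 0.1, 50.0)                  # shape solved numerically (assumed bracket)
    return k, m / G(1 + 1.0 / k)              # (shape, scale)
```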

2.
The stratified Cox model is commonly used for stratified clinical trials with time-to-event endpoints. The estimated log hazard ratio is approximately a weighted average of the corresponding stratum-specific Cox model estimates using inverse-variance weights; the latter are optimal only under the (often implausible) assumption of a constant hazard ratio across strata. Focusing on trials with limited sample sizes (50-200 subjects per treatment), we propose an alternative approach in which stratum-specific estimates are obtained using a refined generalized logrank (RGLR) approach and then combined using either sample size or minimum risk weights for overall inference. Our proposal extends the work of Mehrotra et al. to incorporate the RGLR statistic, which outperforms the Cox model in the setting of proportional hazards and small samples. This work also entails the development of a remarkably accurate plug-in formula for the variance of RGLR-based estimated log hazard ratios. We demonstrate using simulations that our proposed two-step RGLR analysis delivers notably better results, through smaller estimation bias and mean squared error and larger power, than the stratified Cox model analysis when there is a treatment-by-stratum interaction, with similar performance when there is no interaction. Additionally, our method controls the type I error rate in small samples while the stratified Cox model does not. We illustrate our method using data from a clinical trial comparing two treatments for colon cancer.
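As a hedged sketch (the RGLR estimation step itself is not reproduced), the second step of the two-step approach, combining stratum-specific log hazard ratio estimates under different weighting schemes, might look like this; `log_hr`, `var`, and `n` per stratum are assumed inputs:

```python
import numpy as np

def combine_strata(log_hr, var, n, weighting="inverse-variance"):
    log_hr, var, n = map(np.asarray, (log_hr, var, n))
    if weighting == "inverse-variance":
        w = 1.0 / var                 # optimal only under a constant hazard ratio
    else:
        w = n.astype(float)           # sample-size weights
    w /= w.sum()
    est = np.sum(w * log_hr)          # combined log hazard ratio
    se = np.sqrt(np.sum(w**2 * var))  # variance of the weighted average
    return est, se
```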

3.
A general family of univariate distributions generated by beta random variables, proposed by Jones, has been discussed recently in the literature. This family of distributions possesses great flexibility while fitting symmetric as well as skewed models with varying tail weights. In a similar vein, we define here a family of univariate distributions generated by Stacy’s generalized gamma variables. For these two families of univariate distributions, we discuss maximum entropy characterizations under suitable constraints. Based on these characterizations, an expected ratio of quantile densities is proposed for the discrimination of members of these two broad families of distributions. Several special cases of these results are then highlighted. An alternative to the usual method of moments is also proposed for the estimation of the parameters, and the form of these estimators is particularly amenable to these two families of distributions.
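To make the construction concrete, here is an illustrative sketch (our own, with an assumed standard normal baseline) of how the beta-generated family works: if B ~ Beta(a, b) and Q is the baseline quantile function, then X = Q(B) follows Jones' family; the Stacy generalized gamma generated family is obtained analogously by replacing the beta variable with a suitably transformed generalized gamma variable.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a, b = 2.0, 5.0                   # control skewness and tail weight
B = rng.beta(a, b, size=10_000)   # beta generators
X = stats.norm.ppf(B)             # beta-generated sample with a normal baseline
```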

4.
The full Bayesian significance test (FBST) was introduced by Pereira and Stern for measuring the evidence of a precise null hypothesis. The FBST requires both numerical optimization and multidimensional integration, whose computational cost may be heavy when testing a precise null hypothesis on a scalar parameter of interest in the presence of a large number of nuisance parameters. In this paper we propose a higher-order approximation of the measure of evidence for the FBST, based on tail area expansions of the marginal posterior of the parameter of interest. Further results are highlighted for the particular case of matching priors. Numerical illustrations are discussed.

5.
We use chi-squared and related pivot variables to induce probability measures for model parameters, obtaining results on the induced densities that will prove useful. As an illustration, we consider mixed models with balanced cross nesting and use the algebraic structure to derive confidence intervals for the variance components. A numerical application is presented.
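A minimal sketch of the pivot idea, assuming a single normal sample rather than the mixed models with balanced cross nesting treated in the paper: the pivot (n - 1)S^2 / sigma^2 follows a chi-squared distribution with n - 1 degrees of freedom, which induces a distribution for sigma^2 and hence a confidence interval.

```python
import numpy as np
from scipy import stats

def variance_ci(x, alpha=0.05):
    x = np.asarray(x)
    n, s2 = len(x), np.var(x, ddof=1)
    lo = (n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, df=n - 1)
    hi = (n - 1) * s2 / stats.chi2.ppf(alpha / 2, df=n - 1)
    return lo, hi                 # confidence interval for the variance
```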

6.
We examine the asymptotic and small-sample properties of model-based and robust tests of the null hypothesis of no randomized treatment effect based on the partial likelihood arising from an arbitrarily misspecified Cox proportional hazards model. When the distribution of the censoring variable is either conditionally independent of the treatment group given covariates or conditionally independent of covariates given the treatment group, the numerators of the partial likelihood treatment score and Wald tests have asymptotic mean equal to 0 under the null hypothesis, regardless of whether or how the Cox model is misspecified. We show that the model-based variance estimators used in the calculation of the model-based tests are not, in general, consistent under model misspecification, yet using analytic considerations and simulations we show that their true sizes can be as close to the nominal value as those of tests calculated with robust variance estimators. As a special case, we show that the model-based log-rank test is asymptotically valid. When the Cox model is misspecified and the distribution of censoring depends on both the treatment group and covariates, the asymptotic distributions of the resulting partial likelihood treatment score statistic and maximum partial likelihood estimator do not, in general, have zero mean under the null hypothesis. Here neither the fully model-based tests, including the log-rank test, nor the robust tests will be asymptotically valid, and we show through simulations that the distortion to test size can be substantial.

7.
For the linear regression model y = Xβ + e with severe multicollinearity, we put forward three shrinkage-type estimators based on the ordinary least-squares estimator, including two types of independent factor estimators and a seemingly convex combination. A simulation study shows that the new estimators are not good enough when multicollinearity is mild to moderate, but perform very well when multicollinearity is severe to very severe.
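The paper's three estimators are not reproduced here; as a generic sketch of the shrinkage-type template they share, starting from OLS and shrinking, consider a scalar factor c in (0, 1), where both c and any rule for choosing it are purely our illustrative assumptions:

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def shrunken_ols(X, y, c=0.8):
    # convex combination of the OLS estimator and the zero vector
    return c * ols(X, y)
```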

8.

Consider the logistic linear model with some explanatory variables overlooked. Those explanatory variables may be quantitative or qualitative. In either case, the resulting true response variable is not a binomial or a beta-binomial but a sum of binomials. Hence, standard computer packages for logistic regression can be inappropriate even if an overdispersion factor is incorporated. Therefore, a discrete exponential family assumption is considered to broaden the class of sampling models. Likelihood and Bayesian analyses are discussed. Bayesian computation techniques such as Laplace approximations and Markov chain simulations are used to compute posterior densities and moments. Approximate conditional distributions are derived and shown to be accurate. The Markov chain simulations use these approximate conditional distributions to calculate posterior moments effectively. The methodology is applied to Keeler's hardness of winter wheat data for checking binomial assumptions and to Matsumura's accounting exams data for detailed likelihood and Bayesian analyses.

9.
A well-known problem is that ordinary least squares estimation of the parameters in the usual linear model can be highly inefficient when the error term has a heavy-tailed distribution. Inefficiency also arises when the error term is heteroscedastic, and standard confidence intervals can have probability coverage substantially different from the nominal level. This paper compares the small-sample efficiency of six methods that address this problem, three of which model the variance heterogeneity nonparametrically. Three methods were found to be relatively ineffective, but the other three perform relatively well. One of the six (M-regression with a Huber φ function and Schweppe weights) was found to have the highest efficiency for most of the situations considered in the simulations, but there might be situations where one of two other methods gives better results. One of these is a new method that uses a running interval smoother to estimate the optimal weights in weighted least squares, and the other is a method recently proposed by Cohen, Dalal, and Tukey. Computing a confidence interval for the slope using a bootstrap technique is also considered.
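As a hedged illustration of M-regression with a Huber φ function, here is the standard iteratively reweighted least squares recursion; the Schweppe leverage-based downweighting used in the paper is omitted, so this is only the plain Huber variant under our default tuning constant:

```python
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]        # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12      # MAD scale estimate
        u = np.abs(r / s)
        w = np.minimum(1.0, k / np.maximum(u, 1e-12))  # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return beta
```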

10.
This paper investigates several techniques for discriminating between two multivariate stationary signals. The methods considered include Gaussian likelihood ratio tests for variance equality, a chi-squared time-domain test, and a spectral-based test. The latter two tests assess equality of the multivariate autocovariance functions of the two signals over many different lags. The Gaussian likelihood ratio test is perhaps best viewed as principal component analysis (PCA) without the dimension reduction aspects; it can be modified to consider covariance features other than variances via dimension augmentation tactics. The various discrimination methods are first discussed, and a simulation study then illuminates their properties, showing how one can draw inappropriate conclusions with PCA tests even when dimension augmentation techniques are used to incorporate non-zero-lag autocovariances into the analysis. In this pursuit, calculations are needed to identify several multivariate time series models with specific autocovariance properties. To demonstrate the applicability of the methods, nine US and Canadian weather stations from three distinct regions are clustered: the spectral clustering perfectly identified the distinct regions, the chi-squared test performed marginally, and the PCA/likelihood ratio method did not perform well.

11.
This paper introduces an extension of the Markov switching GARCH model in which the volatility in each state is a convex combination of two different GARCH components with time-varying weights. This gives the model the dynamic flexibility to capture different types of shocks. The asymptotic behavior of the second moment is investigated and an appropriate upper bound for it is derived. Using a Bayesian approach via the Gibbs sampling algorithm, a dynamic method for estimating the parameters is proposed. Finally, we illustrate the efficiency of the model by simulation and by considering two different sets of empirical financial data. We show that this model provides much better forecasts of the volatility than the Markov switching GARCH model.
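An illustrative recursion only, under our simplifying assumptions (a single regime shown, weights supplied externally, the Markov switching and Gibbs estimation omitted): within a state, the conditional variance is a convex combination of two GARCH(1,1) components with a time-varying weight.

```python
import numpy as np

def mixed_garch_variance(eps, params1, params2, w):
    (o1, a1, b1), (o2, a2, b2) = params1, params2   # GARCH(1,1) parameters
    h1 = h2 = np.var(eps)                           # crude initialization
    h = np.empty_like(eps, dtype=float)
    for t in range(len(eps)):
        h[t] = w[t] * h1 + (1 - w[t]) * h2          # convex combination
        h1 = o1 + a1 * eps[t]**2 + b1 * h1
        h2 = o2 + a2 * eps[t]**2 + b2 * h2
    return h
```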

12.
The balanced half-sample technique has been used for estimating variances in large-scale sample surveys. This paper considers the bias and variability of two balanced half-sample variance estimators when unique statistical weights are assigned to the sample individuals. Two weighting schemes are considered: in the first, the statistical weights based on the entire sample are used for each of the individual half-samples, while in the second, the weights are adjusted for each individual half-sample. Sampling experiments using computer-generated data from populations with specified values of the strata parameters were performed. Their results indicate that the variance estimators based on the second method are subject to much less bias and variability than those based on the first.
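Once the replicate estimates are in hand, the balanced half-sample variance estimator itself is simple; this sketch assumes the balanced half-sample replicates (under whichever of the two weighting schemes) have already been constructed, e.g. from a Hadamard design:

```python
import numpy as np

def brr_variance(theta_full, theta_reps):
    # mean squared deviation of the replicate estimates from the full-sample estimate
    theta_reps = np.asarray(theta_reps, dtype=float)
    return np.mean((theta_reps - theta_full) ** 2)
```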

13.
The Cox proportional hazards regression model has been widely used to estimate the effect of a prognostic factor on a time-to-event outcome. In a survey of survival analyses in cancer journals, it was found that only 5% of studies using the Cox proportional hazards model attempted to verify the underlying assumption. Usually an estimate of the treatment effect from fitting a Cox model is reported without validation of the proportionality assumption, and it is not clear how such an estimate should be interpreted if the proportionality assumption is violated. In this article, we show that the estimate of the treatment effect from a Cox regression model can be interpreted as a weighted average of the log-scaled hazard ratio over the duration of the study. A hypothetical example is used to explain the weights.
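A toy numerical version of this interpretation, with made-up numbers: if the log hazard ratio varies across study periods, the Cox estimate behaves like an average weighted by the information (events) in each period. Both vectors below are assumptions for illustration only.

```python
import numpy as np

log_hr = np.array([0.9, 0.6, 0.3, 0.1])   # log hazard ratio per study period
events = np.array([10, 25, 40, 25])       # events per period (the weights)
pooled = np.sum(events * log_hr) / events.sum()
print(pooled)                              # a weighted average, not a common HR
```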

14.
The situation of identical distributions of a single m-generalized order statistic and a random convex combination of two neighboring m-generalized order statistics with beta-distributed weights is studied, and related characterizations of generalized Pareto distributions are shown.

15.
A class of distribution-free tests is proposed for the independence of two subsets of response coordinates. The tests are based on the pairwise distances across subjects within each subset of the response. A complete graph is induced by each subset of response coordinates, with the sample points as nodes and the pairwise distances as the edge weights. The proposed test statistic depends only on the rank order of the edges in these complete graphs. The response vector may be of any dimension; in particular, the number of samples may be smaller than the dimension of the response. The test statistic is shown to have a normal limiting distribution with known expectation and variance under the null hypothesis of independence. The exact distribution-free null distribution of the test statistic is given for a sample of size 14, and its Monte Carlo approximation is considered for larger sample sizes. We demonstrate in simulations that this new class of tests has good power properties against very general alternatives.

16.
When combining estimates of a common parameter (of dimension d ≥ 1) from independent data sets—as in stratified analyses and meta-analyses—a weighted average, with weights ‘proportional’ to inverse variance matrices, is shown to have a minimal variance matrix (a standard fact when d = 1)—minimal in the sense that all convex combinations of the coordinates of the combined estimate have minimal variances. Minimum variance for the estimation of a single coordinate of the parameter can therefore be achieved by joint estimation of all coordinates using matrix weights. Moreover, if each estimate is asymptotically efficient within its own data set, then this optimally weighted average, with consistently estimated weights, is shown to be asymptotically efficient in the combined data set and avoids the need to merge the data sets and estimate the parameter in question afresh. This is so whatever additional non-common nuisance parameters may be in the models for the various data sets. A special case of this appeared in Fisher [1925. Theory of statistical estimation. Proc. Cambridge Philos. Soc. 22, 700–725.]: optimal weights are ‘proportional’ to information matrices, and he argued that sample information should be used as weights rather than expected information, to maintain second-order efficiency of maximum likelihood. A number of special cases have appeared in the literature; we review several of them and give additional special cases, including stratified regression analysis (proportional-hazards, logistic or linear), combination of independent ROC curves, and meta-analysis. A test for homogeneity of the parameter across the data sets is also given.
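A direct transcription of the matrix-weighted average described above: with per-data-set estimates beta_i and variance matrices V_i, the combined estimate is (sum_i V_i^{-1})^{-1} sum_i V_i^{-1} beta_i, with variance matrix (sum_i V_i^{-1})^{-1}.

```python
import numpy as np

def combine_matrix_weighted(betas, Vs):
    info = sum(np.linalg.inv(V) for V in Vs)              # total "information"
    score = sum(np.linalg.inv(V) @ b for b, V in zip(betas, Vs))
    V_comb = np.linalg.inv(info)                          # combined variance matrix
    return V_comb @ score, V_comb                         # combined estimate, variance
```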

17.
Nonparametric regression models are often used to check or suggest a parametric model. Several methods have been proposed to test the hypothesis of a parametric regression function against an alternative smoothing spline model. Some tests, such as the locally most powerful (LMP) test of Cox, Koh, Wahba and Yandell (1988, Testing the (parametric) null model hypothesis in (semiparametric) partial and generalized spline models, Ann. Stat., 16, 113–119) and the generalized maximum likelihood (GML) ratio test and generalized cross validation (GCV) test of Wahba (1990, Spline Models for Observational Data, CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM), were developed from the corresponding Bayesian models. Their frequentist properties have not been studied. We conduct simulations to evaluate and compare their finite-sample performance. Simulation results show that the performance of these tests depends on the shape of the true function: the LMP and GML tests are more powerful for low-frequency functions, while the GCV test is more powerful for high-frequency functions. For all test statistics, the distributions under the null hypothesis are complicated, and computationally intensive Monte Carlo methods can be used to calculate them. We also propose approximations to these null distributions and evaluate their performance by simulations.

18.

To test the equality of the covariance matrices of two dependent bivariate normals, we derive five combination tests using the Simes method. We compare the performance of these tests, using simulation, to each other and to competing tests. In particular, the simulations show that one of the combination tests has the best performance in terms of controlling the type I error rate, even for small samples, with power similar to the other tests. We also apply the recommended test to real data from a crossover bioavailability study.
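For reference, a sketch of the Simes combination rule underlying such tests (the five specific component statistics in the paper are not reproduced): with m component p-values sorted as p_(1) <= ... <= p_(m), the combined p-value is the minimum over i of m * p_(i) / i.

```python
import numpy as np

def simes(pvals):
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    return float(np.min(m * p / np.arange(1, m + 1)))   # Simes combined p-value
```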

19.
Maximum likelihood (ML) estimation of relative risks via log-binomial regression requires a restricted parameter space. Computation via nonlinear programming is simple to implement and has a high convergence rate. We show that the optimization problem is well posed (convex domain and convex objective) and provide a variance formula, along with a methodology for obtaining standard errors and prediction intervals that accounts for estimates on the boundary of the parameter space. We performed simulations under several scenarios already used in the literature in order to assess the performance of ML and of two other common estimation methods.
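A hedged sketch of the constrained ML computation (the solver choice and starting value are our assumptions, not necessarily the paper's): maximize the binomial log-likelihood under the log link subject to X @ beta <= 0, which keeps every fitted risk exp(X @ beta) in (0, 1].

```python
import numpy as np
from scipy.optimize import minimize, LinearConstraint

def log_binomial_ml(X, y, eps=1e-9):
    # assumes y is 0/1 with 0 < mean(y) < 1, and the first column of X is an intercept of ones
    def negloglik(beta):
        p = np.clip(np.exp(X @ beta), eps, 1 - eps)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    beta0 = np.zeros(X.shape[1])
    beta0[0] = np.log(np.mean(y))              # feasible start: X @ beta0 < 0
    cons = LinearConstraint(X, -np.inf, 0.0)   # the restricted parameter space
    return minimize(negloglik, beta0, method="trust-constr",
                    constraints=[cons]).x
```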

20.
In this article, we introduce three new classes of multivariate risk statistics, which can be considered data-based versions of multivariate risk measures: multivariate convex risk statistics, multivariate comonotonic convex risk statistics, and multivariate empirical-law-invariant convex risk statistics. Representation results are provided. The arguments in the proofs are largely developed from scratch. It turns out that all the relevant existing results in the literature are special cases of those obtained in this article.
