Similar Documents
20 similar documents were found.
1.
Cancer immunotherapy often yields improvements in both short-term risk reduction and long-term survival. In this setting, a mixture cure model can be used for the trial design. However, the hazard functions of the two groups under a mixture cure model will ultimately cross over, so the conventional proportional-hazards assumption may be violated, and a design based on the standard log-rank test (LRT) can lose power when the main interest is detecting an improvement in long-term survival. In this paper, we propose a change sign weighted LRT for the trial design. We derive a sample size formula for the weighted LRT, which can be used to design cancer immunotherapy trials to detect both short-term risk reduction and long-term survival. Simulation studies compare the efficiency of the standard LRT and the change sign weighted LRT.
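As a rough illustration of the idea, here is a minimal Monte Carlo sketch of a weighted log-rank statistic with a change-sign weight w(t) = sign(t - t0). The crossing time t0 = 1, the mixture-cure data generation, and the administrative censoring at t = 3 are all illustrative assumptions, not the paper's design; the sample size formula itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_logrank(time, event, group, weight_fn):
    """Z-statistic: sum_t w(t)*(O1 - E1) / sqrt(sum_t w(t)^2 * V)."""
    stat, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        e1 = d * n1 / n
        v = d * (n1 / n) * (1 - n1 / n) * (n - d) / max(n - 1, 1)
        w = weight_fn(t)
        stat += w * (d1 - e1)
        var += w * w * v
    return stat / np.sqrt(var)

# Mixture-cure-like data: group 1 has a higher cure fraction and lower hazard.
n = 300
cured = rng.random(2 * n) < np.repeat([0.2, 0.4], n)
t_latent = rng.exponential(np.repeat([1.0, 0.7], n))
time = np.where(cured, 3.0, np.minimum(t_latent, 3.0))   # admin. censoring at 3
event = ((~cured) & (t_latent < 3.0)).astype(int)
group = np.repeat([0, 1], n)

z = weighted_logrank(time, event, group, lambda t: np.sign(t - 1.0))
print(f"change-sign weighted log-rank Z = {z:.2f}")
```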

2.
This paper introduces an alternating conditional expectation (ACE) algorithm: a non-parametric approach for estimating the transformations that lead to the maximal multiple correlation of a response and a set of independent variables in regression and correlation analysis. These transformations can give the data analyst insight into the relationships between the variables, so that these can be best described and non-linear relationships uncovered. Using the Bayesian information criterion (BIC), we show how to find the best closed-form approximations to the optimal ACE transformations. By means of ACE and BIC, the model fit can be considerably improved compared with the conventional linear model, as demonstrated in the two simulated and two real datasets in this paper.
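The alternation at the heart of ACE is easy to sketch. The following toy implementation uses a crude quantile-bin smoother in place of the data-driven smoothers of the original algorithm, and a single predictor; it is a sketch of the idea, not the authors' BIC-based approximation step.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.uniform(-2, 2, n)
y = np.exp(np.sin(np.pi * x) + 0.3 * rng.normal(size=n))

def smooth(u, v, bins=20):
    """Crude conditional-mean smoother estimating E[v | u] via quantile bins."""
    edges = np.quantile(u, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, u, side="right") - 1, 0, bins - 1)
    return np.array([v[idx == b].mean() for b in range(bins)])[idx]

theta = (y - y.mean()) / y.std()          # start from standardized response
for _ in range(30):                       # alternate conditional expectations
    phi = smooth(x, theta)                # phi(x)   = E[theta(y) | x]
    theta = smooth(y, phi)                # theta(y) = E[phi(x) | y]
    theta = (theta - theta.mean()) / theta.std()

# theta should track log(y) and phi should track sin(pi * x).
print("estimated maximal correlation ~", np.corrcoef(theta, smooth(x, theta))[0, 1])
```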

3.
This paper shows how procedures for computing moments and cumulants may themselves be computed from a few elementary identities. Many parameters, such as variance, may be expressed or approximated as linear combinations of products of expectations. The estimates of such parameters may be expressed as the same linear combinations of products of averages. The moments and cumulants of such estimates may be computed in a straightforward way if the terms of the estimates, moments and cumulants are represented as lists and the expectation operation defined as a transformation of lists. Vector space considerations lead to a unique representation of terms and hence to a simplification of results. Basic identities relating variables and their expectations induce transformations of lists, which transformations may be computed from the identities. In this way procedures for complex calculations are computed from basic identities. The procedures permit the calculation of results which would otherwise involve complementary set partitions, k-statistics, and pattern functions. The examples include the calculation of unbiased estimates of cumulants, of cumulants of these, and of moments of bootstrap estimates.
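As a concrete instance of the kind of result such procedures produce, the first four k-statistics (unbiased estimates of cumulants) have well-known closed forms. A small sketch, checked against the cumulants of the Exp(1) distribution:

```python
import numpy as np

def k_statistics(x):
    """Unbiased cumulant estimates (k-statistics) k1..k4 from a sample."""
    n = len(x)
    m2, m3, m4 = (np.mean((x - x.mean()) ** r) for r in (2, 3, 4))
    k1 = x.mean()
    k2 = n / (n - 1) * m2
    k3 = n ** 2 / ((n - 1) * (n - 2)) * m3
    k4 = (n ** 2 * ((n + 1) * m4 - 3 * (n - 1) * m2 ** 2)
          / ((n - 1) * (n - 2) * (n - 3)))
    return k1, k2, k3, k4

x = np.random.default_rng(2).exponential(size=200_000)
# Theoretical Exp(1) cumulants are 1, 1, 2, 6 (some sampling noise in k4).
print([round(k, 2) for k in k_statistics(x)])
```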

4.
Generalized estimating equations (GEE) have become a popular method for marginal regression modelling of data that occur in clusters. Features of the GEE methodology are the use of a ‘working covariance’, an approximation to the underlying covariance, which is used to improve the efficiency in estimating the regression coefficients, and the ‘sandwich’ estimate of variance, which provides a way of consistently estimating their standard errors. These techniques have been extended to include estimating equations for the underlying correlation structure, both to improve the efficiency of the regression coefficient estimates and to provide estimates of correlations between units in a cluster, when these are of interest. If the mean structure is of primary interest, then a simpler set of equations (GEE1) can be used, whereas if the underlying covariance structure is of interest in its own right, the use of the more complex GEE2 estimating equations is often recommended. In this paper, we compare the effect of increasing the complexity of the ‘working covariances’ on the variance of the parameter estimates, as well as the mean-squared error of the ‘sandwich’ estimate of variance. We give asymptotic expressions for these variances and mean-squared error terms. We use these to study the behaviour of different variants of GEE1 and GEE2 when we change the number of clusters, the cluster size, and the within-cluster correlation. We conclude that the extra complexity of the full GEE2 approach is not usually justified if the mean structure is of primary interest.
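A hands-on feel for the effect of the working covariance can be had with statsmodels, which implements GEE1-style estimation with sandwich standard errors (GEE2 is not available there). The clustered-data simulation below is an illustrative assumption, not the paper's design:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.genmod.cov_struct import Independence, Exchangeable

rng = np.random.default_rng(3)
n_clusters, size = 100, 4
g = np.repeat(np.arange(n_clusters), size)
x = rng.normal(size=n_clusters * size)
b = rng.normal(scale=0.5, size=n_clusters)       # shared cluster effect
y = 1.0 + 2.0 * x + b[g] + rng.normal(size=len(x))

X = sm.add_constant(x)
# Compare two working covariances; bse are the robust (sandwich) SEs.
for cov in (Independence(), Exchangeable()):
    res = sm.GEE(y, X, groups=g, family=sm.families.Gaussian(),
                 cov_struct=cov).fit()
    print(type(cov).__name__, res.params.round(3), res.bse.round(3))
```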

5.
The paper reviews recent contributions to statistical inference methods, tests and estimates, based on the generalized median of Oja. Multivariate analogues of sign and rank concepts, affine invariant one-sample and two-sample sign tests and rank tests, and affine equivariant median and Hodges–Lehmann-type estimates are reviewed and discussed. Some comparisons are made to other generalizations. The theory is illustrated by two examples.
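For concreteness, the bivariate Oja median minimizes the sum of areas of the triangles formed by the candidate point and all pairs of observations. A brute-force sketch, feasible only for small samples and using a generic optimizer rather than the specialized algorithms in the literature:

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

rng = np.random.default_rng(4)
X = rng.multivariate_normal([1.0, -1.0], [[1, 0.5], [0.5, 2]], size=60)

pairs = np.array(list(combinations(range(len(X)), 2)))
A, B = X[pairs[:, 0]], X[pairs[:, 1]]

def oja_objective(theta):
    # Sum of triangle areas |det(B - A, theta - A)| / 2 over all pairs.
    d1, d2 = B - A, theta - A
    return 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]).sum()

# Nelder-Mead, since the objective is piecewise linear (non-smooth).
res = minimize(oja_objective, X.mean(axis=0), method="Nelder-Mead")
print("Oja median ~", res.x.round(3))
```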

6.
The distributions of some transformations of the sample correlation coefficient r are studied here, when the parent population is a mixture of two standard bivariate normals. The behavior of these transformations is assessed through the first four standard moments. It is shown that there is a close relationship between the behavior of the transformed variables and the lack of normality as evinced by the 'kurtosis' defined in the bivariate population.
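A quick Monte Carlo sketch of the setting: sample r from a mixture of two standard bivariate normals (differing only in correlation) and inspect the first four standardized moments of r and of Fisher's z = arctanh(r). The mixture weight and correlations below are arbitrary illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def sample_r(n, eps=0.1, rho1=0.0, rho2=0.9):
    """Sample correlation from a mixture of two standard bivariate normals."""
    comp = rng.random(n) < eps
    rho = np.where(comp, rho2, rho1)
    x = rng.normal(size=n)
    y = rho * x + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
    return np.corrcoef(x, y)[0, 1]

r = np.array([sample_r(50) for _ in range(5000)])
z = np.arctanh(r)                      # Fisher's z transformation
for name, v in (("r", r), ("z", z)):
    print(f"{name}: mean {v.mean():.3f}, sd {v.std():.3f}, "
          f"skew {stats.skew(v):.3f}, kurt {stats.kurtosis(v, fisher=False):.3f}")
```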

7.
In the class of stochastic volatility (SV) models, leverage effects are typically specified through the direct correlation between the innovations in both returns and volatility, resulting in the dynamic leverage (DL) model. Recently, two asymmetric SV models based on threshold effects have been proposed in the literature. As such models consider only the sign of the previous return and neglect its magnitude, this paper proposes a dynamic asymmetric leverage (DAL) model that accommodates the direct correlation as well as the sign and magnitude of the threshold effects. A special case of the DAL model with zero direct correlation between the innovations is the asymmetric leverage (AL) model. The dynamic asymmetric leverage models are estimated by the Monte Carlo likelihood (MCL) method. Monte Carlo experiments are presented to examine the finite sample properties of the estimator. For a sample size of T = 2000 with 500 replications, the sample means, standard deviations, and root mean squared errors of the MCL estimators indicate only a small finite sample bias. The empirical estimates for S&P 500 and TOPIX financial returns, and USD/AUD and YEN/USD exchange rates, indicate that the DAL class, including the DL and AL models, is generally superior to threshold SV models with respect to AIC and BIC, with AL typically providing the best fit to the data.

8.
A note on the correlation structure of transformed Gaussian random fields
Transformed Gaussian random fields can be used to model continuous time series and spatial data when the Gaussian assumption is not appropriate. The main features of these random fields are specified in a transformed scale, while for modelling and parameter interpretation it is useful to establish connections between these features and those of the random field in the original scale. This paper provides evidence that for many ‘normalizing’ transformations the correlation function of a transformed Gaussian random field is not very dependent on the transformation that is used. Hence many commonly used transformations of correlated data have little effect on the original correlation structure. The property is shown to hold for some kinds of transformed Gaussian random fields, and a statistical explanation based on the concept of parameter orthogonality is provided. The property is also illustrated using two spatial datasets and several ‘normalizing’ transformations. Some consequences of this property for modelling and inference are also discussed.
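The property is easy to probe at a single pair of locations: draw correlated Gaussian pairs and compare the correlation after several increasing transformations. This sketch uses arbitrary transformations, not the 'normalizing' families or spatial datasets examined in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 200_000
transforms = {"identity": lambda z: z,
              "exp": np.exp,
              "softplus": lambda z: np.log1p(np.exp(z)),
              "cube": lambda z: z ** 3}

for rho in (0.3, 0.6, 0.9):
    z1 = rng.normal(size=n)
    z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.normal(size=n)
    # Correlation of g(z1), g(z2) stays close to rho for each transform g.
    row = {name: round(np.corrcoef(g(z1), g(z2))[0, 1], 3)
           for name, g in transforms.items()}
    print(rho, row)
```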

9.
It is shown that the non-null distribution of the multiple correlation coefficient may be derived rather easily if the correlated normal variables are defined in a convenient way. The invariance of the correlation distribution to linear transformations of the variables makes the results generally applicable. The distribution is derived as the well-known mixture of null distributions, and some generalizations when the variables are not normally distributed are indicated.

10.
The Generalized Estimating Equation (GEE) method popularized by Liang and Zeger provides a very general method for fitting regression models to observations that occur in clusters. Features of the method are the specification of a 'working correlation' (a guess at the true correlation structure of the data), which is used to improve efficiency in estimating the regression coefficients, and the 'information sandwich', which provides a way of consistently estimating the standard errors of the estimated regression coefficients even if (as we might expect) the working correlation is wrong. This paper develops asymptotic expressions for the bias and efficiency both of the regression coefficient estimates and of the sandwich estimate, and uses them to study the behaviour of the estimates. It looks at the effect of the choice of the working correlation on the estimate and also examines the effect of different cluster sizes and different degrees of correlation between the covariates. The performance of these methods is found to be excellent, particularly when the degree of correlation in the responses and covariates is small to moderate.

11.
Suppose estimates are available for correlations between pairs of variables but that the matrix of correlation estimates is not positive definite. In various applications, having a valid correlation matrix is important in connection with follow-up analyses that might, for example, involve sampling from a valid distribution. We present new methods for adjusting the initial estimates to form a proper, that is, nonnegative definite, correlation matrix. These are based on constructing certain pseudo-likelihood functions, formed by multiplying together exact or approximate likelihood contributions associated with the individual correlations. Such pseudo-likelihoods may then be maximized over the range of proper correlation matrices. They may also be utilized to form pseudo-posterior distributions for the unknown correlation matrix, by factoring in relevant prior information for the separate correlations. We illustrate our methods on two examples from a financial time series and genomic pathway analysis.
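For contrast with the paper's pseudo-likelihood approach (not reproduced here), a common quick-and-dirty repair is eigenvalue clipping, which restores nonnegative definiteness but ignores the differing precisions of the individual correlation estimates:

```python
import numpy as np

def nearest_corr_eig(R, eps=1e-8):
    """Eigenvalue-clipping repair of an indefinite correlation matrix."""
    w, V = np.linalg.eigh((R + R.T) / 2)
    R2 = V @ np.diag(np.clip(w, eps, None)) @ V.T
    d = np.sqrt(np.diag(R2))
    R2 = R2 / np.outer(d, d)          # rescale back to unit diagonal
    np.fill_diagonal(R2, 1.0)
    return R2

R = np.array([[1.0, 0.9, 0.7],
              [0.9, 1.0, -0.3],
              [0.7, -0.3, 1.0]])      # indefinite: one negative eigenvalue
print("min eigenvalue before:", np.linalg.eigvalsh(R).min().round(3))
print(nearest_corr_eig(R).round(3))
```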

12.
In multi-category response models, categories are often ordered. For ordinal response models, the usual likelihood approach becomes unstable when the predictor space is ill-conditioned or when the number of parameters to be estimated is large relative to the sample size. The likelihood estimates do not exist when the number of observations is less than the number of parameters. The same problem arises if the constraint on the ordering of the intercepts is not met during the iterative procedure. Proportional odds models (POMs) are the most commonly used models for ordinal responses. In this paper, penalized likelihood with a quadratic penalty is used to address these issues, with a special focus on POMs. To avoid large differences between parameter values corresponding to consecutive categories of an ordinal predictor, the differences between the parameters of adjacent categories are penalized. The considered penalized-likelihood function penalizes the parameter estimates or the differences between them according to the type of predictor. Mean-squared error of the parameter estimates, deviance of the fitted probabilities, and prediction error of the ridge regression are compared with those of the usual likelihood estimates in a simulation study and an application.
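A minimal sketch of a ridge-penalized proportional odds fit, written directly against the penalized log-likelihood; the data generation, the cumulative-exp reparameterization used to keep the intercepts ordered, and the penalty weight are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
n, p, K = 200, 5, 4                       # n obs, p predictors, K categories
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -1.0, 0.5, 0.0, 0.0])
eta = X @ beta_true + rng.logistic(size=n)
y = np.digitize(eta, [-1.0, 0.5, 2.0])    # ordinal response in {0, 1, 2, 3}

def unpack(par):
    # Increasing cutpoints via a cumulative-exp reparameterization.
    alpha = np.concatenate([[par[0]], par[0] + np.cumsum(np.exp(par[1:K - 1]))])
    return alpha, par[K - 1:]

def neg_pen_loglik(par, lam=1.0):
    alpha, beta = unpack(par)
    cum = expit(alpha[None, :] - (X @ beta)[:, None])       # P(Y <= k | x)
    cum = np.column_stack([np.zeros(n), cum, np.ones(n)])
    pk = np.clip(cum[np.arange(n), y + 1] - cum[np.arange(n), y], 1e-12, None)
    return -np.log(pk).sum() + lam * beta @ beta            # ridge penalty

res = minimize(neg_pen_loglik, np.zeros(K - 1 + p), method="BFGS")
print("penalized beta:", unpack(res.x)[1].round(2))
```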

13.
This article shows how to use any correlation coefficient to produce an estimate of location and scale. It is part of a broader system, called a correlation estimation system (CES), that uses correlation coefficients as the starting point for estimations. The method is illustrated using the well-known normal distribution. This article shows that any correlation coefficient can be used to fit a simple linear regression line to bivariate data; the slope and intercept are then estimates of standard deviation and location. Because a robust correlation will produce robust estimates, this CES can be recommended as a tool for everyday data analysis. Simulations indicate that the median with this method using a robust correlation coefficient appears to be nearly as efficient as the mean with good data and much better if there are a few errant data points. Hypothesis testing and confidence intervals are discussed for the scale parameter; both normal and Cauchy distributions are covered.
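One way to read "fitting a line with a correlation coefficient" is to choose the slope that drives the chosen correlation between residuals and regressor to zero; with Pearson correlation this recovers ordinary least squares, while a robust coefficient gives a robust fit. The sketch below applies that reading to the normal QQ line, so the slope and intercept estimate scale and location; the Spearman coefficient and the root-finding bracket are illustrative assumptions, not necessarily the article's construction:

```python
import numpy as np
from scipy import stats
from scipy.optimize import brentq

rng = np.random.default_rng(8)
x = rng.normal(loc=10.0, scale=2.0, size=200)
x[:3] = [40.0, -25.0, 38.0]                    # a few errant points

xs = np.sort(x)
q = stats.norm.ppf((np.arange(1, len(x) + 1) - 0.5) / len(x))  # normal scores

def resid_corr(b):
    # Robust correlation between residuals and normal scores.
    return stats.spearmanr(xs - b * q, q).correlation

b = brentq(resid_corr, 1e-6, 20.0)             # slope  ~ scale estimate
a = np.median(xs - b * q)                      # intercept ~ location estimate
print(f"location ~ {a:.2f}, scale ~ {b:.2f}")  # robust to the errant points
```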

14.
Variance estimation of changes requires estimates of variances and covariances that would be relatively straightforward to make if the sample remained the same from one wave to the next, but this is rarely the case in practice as successive waves are usually different overlapping samples. The author proposes a design-based estimator for covariance matrices that is adapted to this situation. Under certain conditions, he shows that his approach yields non-negative definite estimates for covariance matrices and therefore positive variance estimates for a large class of measures of change.

15.
When modelling a finite population it is sometimes assumed that the residuals from the regression model expectations are distributed with a uniform non-zero intra-class correlation. It is shown that if a certain vector is spanned by the columns of the design matrix (in the homoskedastic case this vector corresponds to the inclusion of a constant term) then such a model is underidentified and the assumption of a known non-zero correlation has almost no impact on the results of the regression analysis. When this vector is not spanned by the columns of the design matrix, a simpler alternative model can usually be fitted equally well to observations from any single population. The only exception occurs when the required intra-class correlation is negative in sign.

16.
Longitudinal studies suffer from patient dropout. The dropout process may be informative if there exists an association between dropout patterns and the rate of change in the response over time. Multiple patterns are plausible in that different causes of dropout might contribute to different patterns. These multiple patterns can be dichotomized into two groups: quantitative and qualitative interaction. Quantitative interaction indicates that each of the multiple sources is biasing the estimate of the rate of change in the same direction, although with differing magnitudes. Alternatively, qualitative interaction results in the multiple sources biasing the estimate of the rate of change in opposing directions. Qualitative interaction is of special concern, since it is less likely to be detected by conventional methods and can lead to highly misleading slope estimates. We explore a test for qualitative interaction based on simultaneous confidence intervals. The test accommodates the realistic situation where reasons for dropout are not fully understood, or even entirely unknown. It allows for an additional level of clustering among participating subjects. We apply these methods to a study exploring tumor growth rates in mice as well as a longitudinal study exploring rates of change in cognitive functioning for Alzheimer's patients.
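A stylized sketch of the dichotomy: given pattern-specific slope estimates and standard errors (the numbers below are invented for illustration), simultaneous intervals that lie entirely on opposite sides of zero signal qualitative interaction. This conveys only the flavor of the idea, not the paper's procedure with clustering:

```python
import numpy as np
from scipy import stats

# Invented slope estimates (rate of change) and SEs for three dropout patterns.
slopes = np.array([-0.8, -0.3, 0.6])
ses = np.array([0.20, 0.15, 0.25])
G = len(slopes)

# Simultaneous (Bonferroni-adjusted) 95% confidence intervals.
z = stats.norm.ppf(1 - 0.05 / (2 * G))
lo, hi = slopes - z * ses, slopes + z * ses

# Qualitative interaction: some intervals entirely below zero, others
# entirely above, i.e. the patterns bias the slope in opposing directions.
qualitative = bool((hi < 0).any() and (lo > 0).any())
for interval in zip(lo.round(2), hi.round(2)):
    print(interval)
print("qualitative interaction flagged:", qualitative)
```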

17.
Monotonic transformations of explanatory continuous variables are often used to improve the fit of the logistic regression model to the data. However, no analytic studies have examined the impact of such transformations. In this paper, we study invariance properties of the logistic regression model under monotonic transformations. We prove that the maximum likelihood estimates, information value, mutual information, Kolmogorov–Smirnov (KS) statistics, and lift table are all invariant under certain monotonic transformations.
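The invariance of the KS statistic is the easiest of these to verify numerically, since KS depends on the score only through its ranks, which any strictly increasing transformation preserves:

```python
import numpy as np

rng = np.random.default_rng(9)
n = 2000
score = rng.normal(size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-score))).astype(int)

def ks_statistic(s, y):
    """KS distance between the score distributions of the two classes."""
    order = np.argsort(s)
    y_sorted = y[order]
    cdf1 = np.cumsum(y_sorted) / y_sorted.sum()
    cdf0 = np.cumsum(1 - y_sorted) / (1 - y_sorted).sum()
    return np.abs(cdf1 - cdf0).max()

# Strictly increasing transforms leave the KS statistic unchanged.
for name, t in (("identity", score), ("exp", np.exp(score)),
                ("tanh", np.tanh(score))):
    print(name, round(ks_statistic(t, y), 6))
```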

18.
We propose covariate adjusted correlation (Cadcor) analysis to target the correlation between two hidden variables that are observed after being multiplied by an unknown function of a common observable confounding variable. The distorting effects of this confounding may alter the correlation relation between the hidden variables. Covariate adjusted correlation analysis enables consistent estimation of this correlation, by targeting the definition of correlation through the slopes of the regressions of the hidden variables on each other and by establishing a connection to varying-coefficient regression. The asymptotic distribution of the resulting adjusted correlation estimate is established. These distribution results, when combined with proposed consistent estimates of the asymptotic variance, lead to the construction of approximate confidence intervals and inference for adjusted correlations. We illustrate our approach through an application to the Boston house price data. Finite sample properties of the proposed procedures are investigated through a simulation study.
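The adjustment can be mimicked crudely by dividing each observed variable by a nonparametric estimate of its conditional mean given the confounder and then correlating; this stand-in uses binned means rather than the paper's varying-coefficient regressions, and assumes the hidden variables are independent of the confounder with nonzero means:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 5000
u = rng.uniform(0, 1, n)                              # observed confounder
hx = 5 + rng.normal(size=n)                           # hidden variables with
hy = 5 + 0.6 * (hx - 5) + 0.8 * rng.normal(size=n)    # corr(hx, hy) = 0.6
x = (1 + u) * hx                                      # multiplicative distortion
y = np.exp(u - 0.5) * hy

def cond_mean(u, v, bins=25):
    """Binned estimate of E[v | u], evaluated at each observation."""
    edges = np.quantile(u, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, u, side="right") - 1, 0, bins - 1)
    return np.array([v[idx == b].mean() for b in range(bins)])[idx]

naive = np.corrcoef(x, y)[0, 1]
adj = np.corrcoef(x / cond_mean(u, x), y / cond_mean(u, y))[0, 1]
print(f"true 0.60, naive {naive:.2f}, adjusted {adj:.2f}")
```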

19.
This article introduces a novel nonparametric penalized likelihood hazard estimation method for the case where the censoring time is dependent on the failure time for each subject under observation. More specifically, we model this dependence using a copula, and the method of maximum penalized likelihood (MPL) is adopted to estimate the hazard function. We do not consider covariates in this article. The non-negatively constrained MPL hazard estimate is obtained using a multiplicative iterative algorithm. The consistency results and the asymptotic properties of the proposed hazard estimator are derived. The simulation studies show that our MPL estimator under dependent censoring with an assumed copula model provides better accuracy than the MPL estimator under independent censoring if the sign of dependence is correctly specified in the copula function. The proposed method is applied to a real dataset, with a sensitivity analysis performed over various values of the correlation between failure and censoring times.

20.
The simulation-extrapolation (SIMEX) approach of Cook and Stefanski (J. Am. Stat. Assoc. 89:1314–1328, 1994) has proved successful in obtaining reliable estimates when variables are measured with (additive) errors. In particular for nonlinear models, this approach has advantages over other procedures, such as the instrumental variable approach, if only variables measured with error are available. However, it has always been assumed that measurement errors in the dependent variable are not correlated with those in the explanatory variables, although such a scenario is quite likely. In that case the (standard) SIMEX suffers from misspecification even for the simple linear regression model. Our paper reports first results from a generalized SIMEX (GSIMEX) approach which takes account of this correlation. We also demonstrate in our simulation study that neglect of the correlation will lead to estimates which may be worse than those from the naive estimator, which completely disregards measurement errors.
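For orientation, here is the standard SIMEX of Cook and Stefanski for a linear regression slope with additive error in the single covariate, assuming the measurement error variance is known and (unlike the GSIMEX setting) uncorrelated with the response error; the quadratic extrapolant is a common illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(11)
n, beta, sigma_u = 1000, 2.0, 0.8
x = rng.normal(size=n)
y = beta * x + rng.normal(size=n)
w = x + rng.normal(scale=sigma_u, size=n)   # x observed with additive error

# The naive slope is attenuated by the factor var(x) / (var(x) + sigma_u^2).
naive = np.polyfit(w, y, 1)[0]

# SIMEX: add extra noise at levels lambda, refit, extrapolate to lambda = -1.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n),
                           y, 1)[0] for _ in range(50)])
       for lam in lambdas]
quad = np.polyfit(lambdas, est, 2)          # quadratic extrapolant
simex = np.polyval(quad, -1.0)
print(f"true {beta}, naive {naive:.2f}, SIMEX {simex:.2f}")
```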
