Similar Literature
 19 similar records found
1.
A self-validating numerical method based on interval analysis for the computation of central and non-central F probabilities and percentiles is reported. The major advantage of this approach is that there are guaranteed error bounds associated with the computed values (or intervals), i.e. the computed values satisfy the user-specified accuracy requirements. The methodology reported in this paper can be adapted to approximate the probabilities and percentiles for other commonly used distribution functions.
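The paper's interval-analysis algorithm is not reproduced here, but the idea of returning a quantile together with a guaranteed enclosing bracket can be illustrated with a simple bisection sketch in Python (the function name and tolerance are illustrative, and the guarantee holds only up to the accuracy of the CDF evaluation):

```python
# Minimal bracketing sketch (not the paper's self-validating interval method):
# return an interval [lo, hi] of width <= tol that contains the requested
# F percentile, central or non-central, up to the accuracy of scipy's CDF.
from scipy.stats import f, ncf

def f_percentile_interval(p, dfn, dfd, nc=0.0, tol=1e-8):
    dist = ncf(dfn, dfd, nc) if nc > 0 else f(dfn, dfd)
    lo, hi = 0.0, 1.0
    while dist.cdf(hi) < p:          # grow the bracket until it covers p
        hi *= 2.0
    while hi - lo > tol:             # bisect
        mid = 0.5 * (lo + hi)
        if dist.cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return lo, hi

print(f_percentile_interval(0.95, dfn=3, dfd=20))          # central F
print(f_percentile_interval(0.95, dfn=3, dfd=20, nc=2.5))  # non-central F
```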

2.
The influence function of the covariance matrix is decomposed into a finite number of components. This decomposition provides a useful tool to develop efficient methods for computing empirical influence curves related to various multivariate methods. It can also be used to characterize multivariate methods from the sensitivity perspective. A numerical example is given to demonstrate efficient computing and to characterize some procedures of exploratory factor analysis.
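As a rough illustration of what an empirical influence curve for the covariance matrix looks like (this is the standard influence function of the covariance functional evaluated at the sample, not the paper's decomposition into components):

```python
# Empirical influence of one observation on the sample covariance matrix:
# IF(x; S) = (x - xbar)(x - xbar)' - S, with sample estimates plugged in.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))            # 100 observations, 3 variables

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False, bias=True)   # ML covariance (divide by n)

def empirical_influence(x):
    d = x - xbar
    return np.outer(d, d) - S

print(empirical_influence(X[0]))         # influence of the first observation on S
```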

3.
4.
This work presents advanced computational aspects of a new method for changepoint detection on spatio-temporal point process data. We summarize the methodology, based on building a Bayesian hierarchical model for the data and declaring prior conjectures on the number and positions of the changepoints, and show how to take decisions regarding the acceptance of potential changepoints. The focus of this work is on choosing an approach that detects the correct changepoint and delivers smooth, reliable estimates in a feasible computational time; we propose Bayesian P-splines as a suitable tool for managing spatial variation, from both a computational and a model-fitting performance perspective. The main computational challenges are outlined, and a solution involving parallel computing in R is proposed and tested in a simulation study. An application is also presented on a data set of seismic events in Italy over the last 20 years.
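The authors' parallel implementation is in R; as a language-agnostic sketch of the general idea of distributing candidate changepoints across workers, one could write something like the following in Python (the scoring function is a toy stand-in, not the Bayesian P-spline model of the paper):

```python
# Toy parallel search over candidate temporal changepoints: each worker scores
# one candidate by the absolute change in event rate before vs. after it.
from multiprocessing import Pool
import numpy as np

def rate_change(args):
    times, tau, t_max = args
    rate_before = np.sum(times < tau) / tau
    rate_after = np.sum(times >= tau) / (t_max - tau)
    return tau, abs(rate_after - rate_before)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t_max = 20.0
    # simulated event times with a rate increase at t = 10
    times = np.sort(np.concatenate([rng.uniform(0, 10, 1000),
                                    rng.uniform(10, t_max, 3000)]))
    candidates = np.linspace(1, 19, 37)
    with Pool() as pool:
        scores = pool.map(rate_change, [(times, tau, t_max) for tau in candidates])
    print(max(scores, key=lambda s: s[1]))   # best-scoring candidate changepoint
```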

5.
In the analysis of variance, we often encounter situations in which we want to test the null hypothesis of homogeneity of the normal means against various partially ordered alternative hypotheses. We study likelihood ratio tests for three useful types of alternatives: d-star, bipartite and broom tree. In particular, we give computational formulas for the level probabilities of the alternative types. The results permit us to obtain critical values for practical use.
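The paper's computational formulas are specific to the d-star, bipartite and broom-tree orderings; for intuition, the level probabilities for the simple order can be approximated by brute-force Monte Carlo, counting how many distinct levels the isotonic regression of independent normal means has (a rough sketch, not the authors' method):

```python
# Monte Carlo estimate of level probabilities P(l, k) for the simple order:
# the probability that the isotonic regression of k iid N(0,1) means has
# exactly l distinct fitted levels.
from collections import Counter
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)
k, n_sim = 4, 20000
iso = IsotonicRegression()
counts = Counter()
for _ in range(n_sim):
    z = rng.normal(size=k)
    fitted = iso.fit_transform(np.arange(k), z)
    counts[len(np.unique(np.round(fitted, 10)))] += 1

for l in range(1, k + 1):
    print(f"P({l}, {k}) ~= {counts[l] / n_sim:.3f}")
```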

6.
In clinical studies, researchers measure patients' responses longitudinally. In recent studies, mixed models have been used to determine effects at the individual level. On the other hand, Henderson et al. [3,4] developed a joint likelihood function that combines the likelihood functions of longitudinal biomarkers and survival times. They put random effects in the longitudinal component to determine whether a longitudinal biomarker is associated with time to an event. In this paper, we treat a longitudinal biomarker as a growth curve and extend Henderson's method to determine whether a longitudinal biomarker is associated with time to an event for multivariate survival data.

7.
Several mathematical programming approaches to the classification problem in discriminant analysis have recently been introduced. This paper empirically compares these newly introduced classification techniques with Fisher's linear discriminant analysis (FLDA), quadratic discriminant analysis (QDA), logit analysis, and several rank-based procedures for a variety of symmetric and skewed distributions. The percentage of correctly classified observations by each procedure in a holdout sample indicates that, while the linear programming approaches compete well with the classical procedures under some experimental conditions, their overall performance lags behind that of the classical procedures.
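A minimal holdout comparison of the classical procedures mentioned above can be set up with scikit-learn as follows (the linear programming classifiers of the paper are not part of scikit-learn and are omitted; data and split are illustrative):

```python
# Fit FLDA, QDA and logit on a training set and compare the proportion of
# correctly classified observations in a holdout sample.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"FLDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis(),
          "logit": LogisticRegression(max_iter=1000)}
for name, model in models.items():
    print(name, model.fit(X_tr, y_tr).score(X_te, y_te))
```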

8.
Tanaka (1988) has derived the influence functions, which are equivalent to the perturbation expansions up to linear terms, of two functions of eigenvalues and eigenvectors of a real symmetric matrix, and applied them to principal component analysis. The present paper deals with the perturbation expansions up to quadratic terms of the same functions and discusses their application to sensitivity analysis in multivariate methods, in particular principal component analysis and principal factor analysis. Numerical examples are given to show how the approximation improves with the quadratic terms.
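A quick numerical check of how the quadratic term improves on the linear term can be done with the standard perturbation expansion for eigenvalues of a symmetric matrix (generic textbook formulas, not Tanaka's expressions for the specific functions studied in the paper):

```python
# For A symmetric with eigenpairs (lam_i, v_i), the eigenvalue of A + eps*B is
# lam_i + eps*v_i'Bv_i + eps^2 * sum_{j!=i} (v_j'Bv_i)^2 / (lam_i - lam_j) + O(eps^3),
# assuming distinct eigenvalues.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)); A = (A + A.T) / 2      # real symmetric matrix
B = rng.normal(size=(5, 5)); B = (B + B.T) / 2      # symmetric perturbation
eps = 1e-2

lam, V = np.linalg.eigh(A)
i = 0                                               # follow the smallest eigenvalue
linear = V[:, i] @ B @ V[:, i]
quad = sum((V[:, j] @ B @ V[:, i]) ** 2 / (lam[i] - lam[j])
           for j in range(5) if j != i)

exact = np.linalg.eigh(A + eps * B)[0][i]
print("exact               :", exact)
print("up to linear term   :", lam[i] + eps * linear)
print("up to quadratic term:", lam[i] + eps * linear + eps**2 * quad)
```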

9.
Two seemingly different approaches to simplicity in the analysis of connected block designs, and their relationship to the concepts of balance, are discussed.

10.
A confidence interval (CI) for the standard deviation of a normal distribution, based on a pivotal quantity with a chi-square distribution, is considered. The ratio of the interval's endpoints is taken as a measure of CI quality. Formulas are given for sample sizes such that this ratio does not exceed a fixed value. Both equally tailed CIs and CIs with the minimum ratio of endpoints are considered.
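For the equally tailed interval, the ratio of the endpoints of the CI for sigma is sqrt(chi2_{1-a/2, n-1} / chi2_{a/2, n-1}), which does not depend on the data, so the required sample size can be found by a direct search (a sketch; the minimum-ratio interval of the paper needs a separate optimisation):

```python
# Smallest n for which the ratio of the endpoints of the equally tailed CI for
# sigma, sqrt(chi2_{1-a/2, n-1} / chi2_{a/2, n-1}), does not exceed k.
from scipy.stats import chi2

def min_sample_size(k, alpha=0.05, n_max=100_000):
    for n in range(2, n_max):
        df = n - 1
        ratio = (chi2.ppf(1 - alpha / 2, df) / chi2.ppf(alpha / 2, df)) ** 0.5
        if ratio <= k:
            return n
    raise ValueError("no sample size below n_max achieves the requested ratio")

print(min_sample_size(k=1.5))   # endpoint ratio at most 1.5
print(min_sample_size(k=1.2))   # tighter requirement needs a larger n
```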

11.
Papers dealing with measures of predictive power in survival analysis have regarded independence of censoring, or unbiasedness of the estimates under censoring, as the most important property. We argue that this property has been wrongly understood. Discussing the so-called measure of information gain, we point out that we cannot have unbiased estimates if all values greater than a given time τ are censored. This is because censoring before τ has a different effect than censoring after τ. Such a τ is often introduced by the design of a study. Independence can only be achieved under the assumption that the model remains valid after τ, which is impossible to verify. But if one is willing to make such an assumption, we suggest using multiple imputation to obtain a consistent estimate. We further show that censoring has different effects on the estimation of the measure for the Cox model than for parametric models, and we discuss them separately. We also give some warnings about the use of the measure, especially when it comes to comparing essentially different models.

12.
13.
To accelerate the drug development process and shorten approval time, the design of multiregional clinical trials (MRCTs) incorporates subjects from many countries/regions around the world under the same protocol. After showing the overall efficacy of a drug across all global regions, one can also simultaneously evaluate the possibility of applying the overall trial results to all regions and subsequently support drug registration in each of them. In this paper, we focus on a specific region and establish a statistical criterion to assess the consistency between the results for that region and the overall results in an MRCT. More specifically, we treat each region in an MRCT as an independent clinical trial, each perhaps with a different treatment effect. We then construct empirical prior information for the treatment effect in the specific region on the basis of the observed data from all other regions. We conclude that the specific region is consistent with all regions if the posterior probability of a positive treatment effect in the specific region is large, say at least 80%. Numerical examples illustrate applications of the proposed approach in different scenarios.
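A hedged normal-normal sketch of this kind of criterion (illustrative numbers, not the paper's exact prior construction): build a precision-weighted prior from the other regions' effect estimates, update it with the specific region's own estimate, and check whether the posterior probability of a positive effect reaches 80%.

```python
# Posterior probability of a positive treatment effect in one region, with a
# prior built from the remaining regions (all numbers are made up).
import numpy as np
from scipy.stats import norm

est_other = np.array([0.30, 0.25, 0.40])   # effect estimates from other regions
se_other = np.array([0.10, 0.12, 0.15])    # and their standard errors

prior_prec = np.sum(1 / se_other**2)                       # precision-weighted prior
prior_mean = np.sum(est_other / se_other**2) / prior_prec

est_region, se_region = 0.15, 0.20                         # the specific region

post_prec = prior_prec + 1 / se_region**2
post_mean = (prior_prec * prior_mean + est_region / se_region**2) / post_prec
post_sd = post_prec ** -0.5

p_pos = 1 - norm.cdf(0.0, loc=post_mean, scale=post_sd)
print(p_pos, "consistent" if p_pos >= 0.80 else "not consistent")
```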

14.
This article presents a reliable method for highlighting a defective stage within a manufacturing process when the existence of a failure is only known at the end of the process. It was developed in the context of integrated circuit manufacturing, where low costs and high yields are indispensable if the manufacturer is to be competitive. Change detection methods were used to point out the defective stage; two methods were compared and the better one chosen. Thanks to this approach, it was possible to solve some yield problems for which the engineers' investigations were far from the real cause of failure. However, there is a strong requirement to assess the reliability of the suspicions cast on the incriminated stage; otherwise, engineers could be made to do useless work, and time could be wasted looking into events that are not the true cause of failure. Two complementary tools were implemented for this reliability assessment, and their efficiency is illustrated by several examples.

15.
In the analysis of medical survival data, semiparametric proportional hazards models are widely used. When the proportional hazards assumption is not tenable, these models will not be suitable, and other models for covariate effects can be useful. In particular, we consider accelerated life models, in which the effect of covariates is to scale the quantiles of the base-line distribution. Solomon and Hutton have suggested that there is some robustness to misspecification of survival regression models. They showed that the relative importance of covariates is preserved under misspecification, given the assumptions of small coefficients and an orthogonal transformation of the covariates. We elucidate these results by applications to data from five trials comparing two common anti-epileptic drugs (carbamazepine versus sodium valproate monotherapy for epilepsy) and to the survival of a cohort of people with cerebral palsy. Results on robustness against model misspecification depend on the assumption of small coefficients and on the underlying distribution of the data. These results hold for the cerebral palsy data but not for the epilepsy data, which have early high hazard rates. The orthogonality of the coefficients is not important. However, the choice of model is important for the estimation of the magnitude of effects, particularly if the base-line shape parameter indicates high initial hazard rates.

16.
If interest lies in reporting absolute measures of risk from time-to-event data, then obtaining an appropriate approximation to the shape of the underlying hazard function is vital. It has previously been shown that restricted cubic splines can be used to approximate complex hazard functions in the context of time-to-event data. The degree of complexity of the spline functions is dictated by the number of knots that are defined. We highlight through the use of a motivating example that complex hazard function shapes are often required when analysing time-to-event data. Through the use of simulation, we show that, provided a sufficient number of knots is used, the approximated hazard functions given by restricted cubic splines fit closely to the true function for a range of complex hazard shapes. The simulation results also highlight the insensitivity of the estimated relative effects (hazard ratios) to the correct specification of the baseline hazard.
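As a sketch of the spline machinery involved, the restricted cubic spline basis (Harrell's truncated-power form, without his optional rescaling) can be built directly; such a basis is commonly used to model the log baseline (cumulative) hazard flexibly, with the number of knots controlling the complexity:

```python
# Restricted cubic spline basis: cubic between the knots, linear beyond the
# boundary knots; k knots give 1 linear plus (k - 2) nonlinear columns.
import numpy as np

def rcs_basis(x, knots):
    x = np.asarray(x, dtype=float)
    k = np.asarray(knots, dtype=float)
    pos3 = lambda u: np.clip(u, 0.0, None) ** 3     # truncated cube (u)_+^3
    cols = [x]                                      # linear term
    for j in range(len(k) - 2):
        cols.append(pos3(x - k[j])
                    - pos3(x - k[-2]) * (k[-1] - k[j]) / (k[-1] - k[-2])
                    + pos3(x - k[-1]) * (k[-2] - k[j]) / (k[-1] - k[-2]))
    return np.column_stack(cols)

t = np.linspace(0.1, 10.0, 200)                     # follow-up times
B = rcs_basis(np.log(t), knots=np.log([0.2, 1.0, 4.0, 9.0]))
print(B.shape)                                      # (200, 3) for 4 knots
```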

17.
18.
The main advantage of longitudinal studies is that they can distinguish changes over time within individuals (longitudinal effects) from differences between subjects at the start of the study (base-line characteristics; cross-sectional effects). Often, especially in observational studies, subjects are very heterogeneous at base-line, and one may want to correct for this when making inferences about the longitudinal trends. Three procedures for base-line correction are compared in the context of linear mixed models for continuous longitudinal data. All procedures are illustrated extensively by using data from an experiment that aimed at studying the relationship between the post-operative evolution of the functional status of elderly hip fracture patients and their preoperative neurocognitive status.
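One common variant of baseline correction, including the baseline value as a covariate in a linear mixed model for the follow-up measurements, can be sketched with statsmodels as follows (simulated data and variable names are illustrative; the paper compares this kind of procedure with alternatives):

```python
# Linear mixed model for follow-up values with the baseline measurement as a
# covariate and a random intercept per subject (toy data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_subj, n_visits = 50, 4
subject = np.repeat(np.arange(n_subj), n_visits)
time = np.tile(np.arange(1, n_visits + 1), n_subj)
baseline = np.repeat(rng.normal(50, 10, n_subj), n_visits)
rand_int = np.repeat(rng.normal(0, 2, n_subj), n_visits)
y = 0.8 * baseline + 2.0 * time + rand_int + rng.normal(0, 3, n_subj * n_visits)

df = pd.DataFrame({"subject": subject, "time": time, "baseline": baseline, "y": y})
fit = smf.mixedlm("y ~ time + baseline", df, groups=df["subject"]).fit()
print(fit.summary())
```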

19.
For a trial with overall survival as the primary endpoint for a molecule with curative potential, statistical methods that rely on the proportional hazards assumption may underestimate the power and the time to final analysis. We show how a cure proportion model can be used to obtain the necessary number of events and the appropriate timing via simulation. If the phase 1 results for the new drug are exceptional and/or the medical need in the target population is high, a phase 3 trial might be initiated after phase 1. Building a futility interim analysis into such a pivotal trial may mitigate the uncertainty of moving directly to phase 3. However, if cure is possible, overall survival might not be mature enough at the interim to support a futility decision. We propose to base this decision on an intermediate endpoint that is sufficiently associated with survival. Planning for such an interim can be interpreted as making a randomized phase 2 trial part of the pivotal trial: if stopped at the interim, the trial data would be analyzed and a decision on a subsequent phase 3 trial would be made; if the trial continues at the interim, then the phase 3 trial is already underway. To select a futility boundary, a mechanistic simulation model that connects the intermediate endpoint and survival is proposed. We illustrate how this approach was used to design a pivotal randomized trial in acute myeloid leukemia and discuss the historical data that informed the simulation model and the operational challenges encountered when implementing it.
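A rough simulation sketch of the event-count side of this planning exercise, using a simple mixture cure model (exponential survival for uncured patients, uniform accrual; all numbers are illustrative and this is not the authors' mechanistic model):

```python
# Mixture cure model: a fraction `cure_prob` never has the event; the rest have
# exponential survival. Count deaths observed by a given analysis time.
import numpy as np

rng = np.random.default_rng(4)

def simulated_deaths(n, cure_prob, median_uncured, accrual_years, analysis_time):
    entry = rng.uniform(0.0, accrual_years, n)          # uniform accrual
    cured = rng.random(n) < cure_prob
    t_event = rng.exponential(median_uncured / np.log(2), n)
    t_event[cured] = np.inf                             # cured patients: no event
    return int(np.sum(entry + t_event <= analysis_time))

# e.g. 300 patients per arm, cure proportions 30% vs 45%, 1-year median otherwise
for year in (2, 3, 4, 5):
    deaths = (simulated_deaths(300, 0.30, 1.0, 2.0, year)
              + simulated_deaths(300, 0.45, 1.0, 2.0, year))
    print(f"analysis at year {year}: about {deaths} deaths in one simulated trial")
```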
