Similar Documents
 20 similar documents found (search time: 359 ms)
1.
Non-mixture cure models (NMCMs) are derived from a simplified representation of the biological process that takes place after treatment for cancer. These models are intended to represent the time from the end of treatment to the time of first recurrence of cancer in studies when a proportion of those treated are completely cured. However, for many studies overall survival is also of interest. A two-stage NMCM that estimates the overall survival from a combination of two cure models, one from end of treatment to first recurrence and one from first recurrence to death, is proposed. The model is applied to two studies of Ewing's tumor in young patients. Caution needs to be exercised when extrapolating from cure models fitted to short follow-up times, but these data and associated simulations show how, when follow-up is limited, a two-stage model can give more stable estimates of the cure fraction than a one-stage model applied directly to overall survival.
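For reference, the standard one-stage non-mixture (promotion-time) cure model has the survival function sketched below; the two-stage proposal chains two models of this type (end of treatment to recurrence, recurrence to death). This is a generic textbook form, a hedged sketch rather than the authors' exact specification.

```latex
% Generic one-stage non-mixture cure model: \pi is the cure fraction and
% F(t) a proper distribution function, so survival levels off at \pi.
S(t) = \pi^{F(t)} = \exp\{\log(\pi)\,F(t)\}, \qquad \lim_{t \to \infty} S(t) = \pi .
```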

2.
Low income proportion is an important index in comparisons of poverty in countries around the world. The stability of a society depends heavily on this index. An accurate and reliable estimation of this index plays an important role for government's economic policies. In this paper, the authors study empirical likelihood‐based inferences for a low income proportion under the simple random sampling and stratified random sampling designs. It is shown that the limiting distributions of the empirical likelihood ratios for the low income proportion are the scaled chi‐square distributions. The authors propose various empirical likelihood‐based confidence intervals for the low income proportion. Extensive simulation studies are conducted to evaluate the relative performance of the normal approximation‐based interval, bootstrap‐based intervals, and the empirical likelihood‐based intervals. The proposed methods are also applied to analyzing a real economic survey income dataset. The Canadian Journal of Statistics 39: 1–16; 2011 ©2011 Statistical Society of Canada
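For orientation, a hedged sketch of the quantities involved (the notation and the particular definition of the index are mine, not necessarily the authors'): with income distribution function F and median m, a common low income proportion is the share of incomes below half the median, and the empirical likelihood ratio statistic for it has a scaled chi-square limit.

```latex
% F is the income distribution function with median m; R(.) denotes the
% profile empirical likelihood ratio (notation assumed for illustration).
\theta = F(m/2), \qquad -2\log R(\theta_0) \xrightarrow{\;d\;} c\,\chi^2_1 ,
% where c > 0 is a scaling constant that must be estimated -- hence the
% *scaled* chi-square limiting distribution mentioned above.
```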

3.
Satellite remote-sensing is used to collect important atmospheric and geophysical data at various spatial resolutions, providing insight into spatiotemporal surface and climate variability globally. These observations are often plagued with missing spatial and temporal information of Earth's surface due to (1) cloud cover at the time of a satellite passing and (2) infrequent passing of polar-orbiting satellites. While many methods are available to model missing data in space and time, in the case of land surface temperature (LST) from thermal infrared remote sensing, these approaches generally ignore the temporal pattern called the ‘diurnal cycle’ which physically constrains temperatures to peak in the early afternoon and reach a minimum at sunrise. In order to infill an LST dataset, we parameterize the diurnal cycle into a functional form with unknown spatiotemporal parameters. Using multiresolution spatial basis functions, we estimate these parameters from sparse satellite observations to reconstruct an LST field with continuous spatial and temporal distributions. These estimations may then be used to better inform scientists of spatiotemporal thermal patterns over relatively complex domains. The methodology is demonstrated using data collected by MODIS on NASA's Aqua and Terra satellites over both Houston, TX and Phoenix, AZ USA.
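A minimal sketch of the infilling idea at a single pixel, assuming a simple sinusoidal diurnal-cycle shape; the function name `diurnal` and the example observation times are made up, and the paper's multiresolution spatial basis functions are not reproduced here.

```python
# Hedged sketch (not the authors' exact parameterization): fit a sinusoidal
# diurnal cycle T(t) = m + a*sin(2*pi*(t - phi)/24) to sparse LST observations
# at one pixel, then reconstruct the full 24-hour cycle.
import numpy as np
from scipy.optimize import curve_fit

def diurnal(t, mean_temp, amplitude, phase):
    return mean_temp + amplitude * np.sin(2 * np.pi * (t - phase) / 24.0)

# sparse observation times (hours) and temperatures (K), e.g. from overpasses
t_obs = np.array([1.5, 10.5, 13.5, 22.5])
y_obs = np.array([288.0, 301.0, 305.0, 290.0])

params, _ = curve_fit(diurnal, t_obs, y_obs, p0=[295.0, 8.0, 9.0])
t_grid = np.linspace(0, 24, 97)
lst_filled = diurnal(t_grid, *params)   # continuous reconstruction of the cycle
```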

4.
The distribution of the sample correlation coefficient is derived when the population is a mixture of two bivariate normal distributions with zero mean but different covariances and mixing proportions 1 - λ and λ respectively; λ will be called the proportion of contamination. The test of ρ = 0 based on Student's t, Fisher's z, arcsine, or Ruben's transformation is shown numerically to be nonrobust when λ, the proportion of contamination, lies between 0.05 and 0.50 and the contaminated population has 9 times the variance of the standard (bivariate normal) population. These tests are also sensitive to the presence of outliers.
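A rough simulation sketch of the nonrobustness being described: the empirical size of the Fisher-z test of ρ = 0 under a zero-mean bivariate normal mixture whose contaminating component has nine times the variance. Sample size, mixing weight, and replication count are arbitrary choices for illustration, not the paper's settings.

```python
# Illustrative only: size of the nominal 5% Fisher-z test of rho = 0 when each
# observation comes from a contaminating bivariate normal (variance x 9) with
# probability lam; rho = 0 holds in both mixture components.
import numpy as np

rng = np.random.default_rng(0)
n, lam, n_rep = 30, 0.10, 2000
rejections = 0
for _ in range(n_rep):
    contaminated = rng.random(n) < lam
    scale = np.where(contaminated, 3.0, 1.0)      # sd ratio 3 => variance ratio 9
    x = rng.standard_normal(n) * scale
    y = rng.standard_normal(n) * scale            # same component for both margins
    r = np.corrcoef(x, y)[0, 1]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - 3)   # Fisher's z statistic
    rejections += abs(z) > 1.96
print("empirical size:", rejections / n_rep)
```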

5.
Two aspects of Taguchi's methods for analyzing parameter design experiments that can be improved upon are considered. It is shown how using interaction graphs instead of marginal graphs, and how using the sample variance instead of a signal-to-noise ratio, can lead to product designs that are more robust to variation. The advantages of the alternative analysis will be illustrated by reanalyzing a case study considered by Barker (1986).

6.
We propose a new cure model for survival data with a surviving or cure fraction. The new model is a mixture cure model where the covariate effects on the proportion of cure and the distribution of the failure time of uncured patients are separately modeled. Unlike the existing mixture cure models, the new model allows covariate effects on the failure time distribution of uncured patients to be negligible at time zero and to increase as time goes by. Such a model is particularly useful in some cancer treatments when the treatment effect increases gradually from zero, and the existing models usually cannot handle this situation properly. We develop a rank based semiparametric estimation method to obtain the maximum likelihood estimates of the parameters in the model. We compare it with existing models and methods via a simulation study, and apply the model to a breast cancer data set. The numerical studies show that the new model provides a useful addition to the cure model literature.
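For context, the standard mixture cure decomposition that the new model builds on is sketched below; the time-varying covariate effect on the latency part, which is the paper's contribution, is not reproduced.

```latex
% Standard mixture cure decomposition: \pi(x) is the cure probability
% (incidence part) and S_u the survival function of the uncured (latency part).
S_{\mathrm{pop}}(t \mid x, z) = \pi(x) + \{1 - \pi(x)\}\, S_u(t \mid z).
```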

7.
Kayo Denda, Serials Review, 2013, 39(4): 261–266

The Women's & Gender Studies Journal Database (WGSJD) at Rutgers University is a user-oriented, discipline-based database that provides quick access to journal titles available online. It is also a union list of journal titles, print and online, in the area of women's and gender studies at Rutgers University Libraries. The author describes this database and its value-added service in the context of Rutgers University Libraries' information system. Discussion also includes its advantages and disadvantages, the changing roles and areas of responsibilities for subject selectors, and how the database fits into future developments of the library's information system. Serials Review 2002; 28:261–266.

8.

Regression spline smoothing is a popular approach for conducting nonparametric regression. An important issue associated with it is the choice of a "theoretically best" set of knots. Different statistical model selection methods, such as Akaike's information criterion and generalized cross-validation, have been applied to derive different "theoretically best" sets of knots. Typically these best knot sets are defined implicitly as the optimizers of some objective functions. Hence another equally important issue concerning regression spline smoothing is how to optimize such objective functions. In this article different numerical algorithms that are designed for carrying out such optimization problems are compared by means of a simulation study. Both the univariate and bivariate smoothing settings will be considered. Based on the simulation results, recommendations for choosing a suitable optimization algorithm under various settings will be provided.
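An illustrative sketch of one such objective function: choosing the number of equally spaced interior knots of a cubic regression spline by generalized cross-validation (GCV), optimized here by a plain grid search rather than the specialized algorithms the article compares. The helper `design_matrix` and all settings are assumptions made for the example.

```python
# Hedged sketch: grid-search the number of interior knots of a cubic
# regression spline by minimizing GCV.
import numpy as np
from scipy.interpolate import BSpline

def design_matrix(x, n_knots, degree=3):
    # equally spaced interior knots plus repeated boundary knots
    interior = np.linspace(x.min(), x.max(), n_knots + 2)[1:-1]
    t = np.r_[[x.min()] * (degree + 1), interior, [x.max()] * (degree + 1)]
    n_basis = len(t) - degree - 1
    return BSpline(t, np.eye(n_basis), degree)(x)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=200)

best = None
for k in range(1, 15):                         # candidate numbers of interior knots
    B = design_matrix(x, k)
    H = B @ np.linalg.pinv(B)                  # hat matrix of the least-squares fit
    resid = y - H @ y
    gcv = len(y) * (resid @ resid) / (len(y) - np.trace(H)) ** 2
    if best is None or gcv < best[0]:
        best = (gcv, k)
print("GCV-chosen number of interior knots:", best[1])
```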

9.
We propose a Bayesian semiparametric model for survival data with a cure fraction. We explicitly consider a finite cure time in the model, which allows us to separate the cured and the uncured populations. We take a mixture prior of a Markov gamma process and a point mass at zero to model the baseline hazard rate function of the entire population. We focus on estimating the cure threshold after which subjects are considered cured. We can incorporate covariates through a structure similar to the proportional hazards model and allow the cure threshold also to depend on the covariates. For illustration, we undertake simulation studies and a full Bayesian analysis of a bone marrow transplant data set.

10.
The confidence interval of the Kaplan–Meier estimate of the survival probability at a fixed time point is often constructed by the Greenwood formula. This normal approximation-based method can be viewed as a Wald-type confidence interval for a binomial proportion, the survival probability, using the “effective” sample size defined by Cutler and Ederer. The Wald-type binomial confidence interval has been shown to perform poorly compared to other methods. We choose three methods of binomial confidence intervals for the construction of confidence intervals for the survival probability: Wilson's method, Agresti–Coull's method, and the higher-order asymptotic likelihood method. The methods of “effective” sample size proposed by Peto et al. and Dorey and Korn are also considered. The Greenwood formula is far from satisfactory, while confidence intervals based on the three methods of binomial proportion using Cutler and Ederer's “effective” sample size have much better performance.
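A hedged sketch of one of the three constructions discussed: a Wilson-type interval for the Kaplan–Meier survival probability using Cutler and Ederer's "effective" sample size derived from the Greenwood variance. The function name and the example inputs are made up for illustration.

```python
# Sketch only: Wilson interval for a survival probability S_hat at a fixed
# time, with n_eff = S_hat*(1 - S_hat)/Var(S_hat) (Cutler-Ederer effective n)
# computed from the Greenwood variance estimate.
import numpy as np
from scipy.stats import norm

def wilson_survival_ci(s_hat, greenwood_var, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    n_eff = s_hat * (1 - s_hat) / greenwood_var       # effective sample size
    centre = (s_hat + z**2 / (2 * n_eff)) / (1 + z**2 / n_eff)
    half = (z / (1 + z**2 / n_eff)) * np.sqrt(
        s_hat * (1 - s_hat) / n_eff + z**2 / (4 * n_eff**2))
    return centre - half, centre + half

print(wilson_survival_ci(s_hat=0.85, greenwood_var=0.002))
```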

11.
Two‐stage design is very useful in clinical trials for evaluating the validity of a specific treatment regimen. When the second stage is allowed to continue, the method used to estimate the response rate based on the results of both stages is critical for the subsequent design. The often‐used sample proportion has an evident upward bias. However, the maximum likelihood estimator or the moment estimator tends to underestimate the response rate. A mean‐square error weighted estimator is considered here; its performance is thoroughly investigated via Simon's optimal and minimax designs and Shuster's design. Compared with the sample proportion, the proposed method has a smaller bias, and compared with the maximum likelihood estimator, the proposed method has a smaller mean‐square error. Copyright © 2010 John Wiley & Sons, Ltd.
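A quick simulation sketch of the upward bias of the naive pooled sample proportion (not the proposed mean-square error weighted estimator): in a Simon-type two-stage design, the estimate is computed only when the stage-1 response count exceeds the futility boundary. The design constants below are hypothetical.

```python
# Illustrative only: conditional on continuing to stage 2, the pooled sample
# proportion overestimates the true response rate.
import numpy as np

rng = np.random.default_rng(2)
n1, n_total, r1 = 19, 35, 4          # hypothetical stage sizes and stage-1 boundary
p_true = 0.25
estimates = []
for _ in range(100_000):
    x1 = rng.binomial(n1, p_true)
    if x1 > r1:                                   # continue to stage 2
        x2 = rng.binomial(n_total - n1, p_true)
        estimates.append((x1 + x2) / n_total)     # naive pooled sample proportion
print("mean estimate given continuation:", np.mean(estimates), "vs true", p_true)
```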

12.
In this paper, an algorithm for generating random matrices with orthonormal columns is introduced. As pointed out by a referee, the algorithm is almost identical to Wedderburn's (1975) unpublished method. The method can also be considered as an extension of Stewart's (1980) method, which was designed to generate random orthogonal matrices. It is found to outperform a simple extension of the QR factorization method as well as Heiberger's (1978) method. This paper also demonstrates how the algorithm can be used in generating multivariate normal variates with given sample mean and sample covariance matrix.
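For comparison, a sketch of the "QR of a Gaussian matrix" approach that the proposed algorithm is benchmarked against: orthonormalize an n × p matrix of i.i.d. standard normals and fix the column signs so the result is uniformly (Haar) distributed. The paper's own Wedderburn/Stewart-style algorithm is not reproduced here.

```python
# Sketch: random n x p matrix with orthonormal columns via QR of a Gaussian
# matrix, with a sign correction so the distribution is uniform (Haar).
import numpy as np

def random_orthonormal_columns(n, p, rng=np.random.default_rng()):
    g = rng.standard_normal((n, p))
    q, r = np.linalg.qr(g)                     # q: n x p with orthonormal columns
    return q * np.sign(np.diag(r))             # flip columns where diag(r) < 0

q = random_orthonormal_columns(6, 3)
print(np.allclose(q.T @ q, np.eye(3)))         # columns are orthonormal
```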

13.
Mood's test, which is a relatively old test (and the oldest non‐parametric test in its class) for determining heterogeneity of variance, is still widely used in different areas such as biometry, biostatistics and medicine. Although it is a popular test, it is not suitable for use on a two‐way factorial design. In this paper, Mood's test is generalised to the 2 × 2 factorial design setting and its performance is compared with that of Klotz's test. The power and robustness of these tests are examined in detail by means of a simulation study with 10,000 replications. Based on the simulation results, the generalised Mood's and Klotz's tests can especially be recommended in settings in which the parent distribution is symmetric. As an example application we analyse data from a multi‐factor agricultural system that involves chilli peppers, nematodes and yellow nutsedge. This example dataset suggests that the performance of the generalised Mood test is in agreement with that of the generalised Klotz test.
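A minimal sketch of the classical two-sample Mood test for scale that the paper generalises, using SciPy's implementation; the 2 × 2 factorial extension itself is not shown here, and the data are simulated purely for illustration.

```python
# Classical (two-sample) Mood test for equal scale parameters.
import numpy as np
from scipy.stats import mood

rng = np.random.default_rng(3)
group_a = rng.normal(loc=0.0, scale=1.0, size=40)
group_b = rng.normal(loc=0.0, scale=2.0, size=40)   # larger spread
z_stat, p_value = mood(group_a, group_b)
print(f"Mood's test: z = {z_stat:.2f}, p = {p_value:.4f}")
```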

14.
In survival data analysis, a significant amount of right censoring frequently occurs, indicating that there may be a proportion of individuals in the study for whom the event of interest will never happen. This fact is not accounted for by ordinary survival theory. Consequently, survival models with a cure fraction have received a lot of attention in recent years. In this article, we consider the standard mixture cure rate model where a fraction p0 of the population consists of cured or immune individuals and the remaining 1 − p0 are not cured. We assume an exponential distribution for the survival time and a uniform-exponential distribution for the censoring time. In a simulation study, the impact caused by the informative uniform-exponential censoring on the coverage probabilities and lengths of asymptotic confidence intervals is analyzed by using the Fisher information and observed information matrices.
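The population survival function of the standard mixture cure rate model described above, with exponential survival for the uncured, can be written as follows (a sketch for reference, in my notation):

```latex
% Population survival under the standard mixture cure rate model with
% exponential survival for the uncured; the curve levels off at p_0.
S_{\mathrm{pop}}(t) = p_0 + (1 - p_0)\, e^{-\lambda t}, \qquad t \ge 0 .
```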

15.
The most popular method for trying to detect an association between two random variables is to test H0: ρ = 0, the hypothesis that Pearson's correlation is equal to zero. It is well known, however, that Pearson's correlation is not robust, roughly meaning that small changes in any distribution, including any bivariate normal distribution as a special case, can alter its value. Moreover, the usual estimate of ρ, r, is sensitive to even a few outliers, which can mask a true association. A simple alternative to testing H0: ρ = 0 is to switch to a measure of association that guards against outliers among the marginal distributions, such as Kendall's tau, Spearman's rho, a Winsorized correlation, or a so-called percentage bend correlation. But it is known that these methods fail to take into account the overall structure of the data. Many measures of association that do take into account the overall structure of the data have been proposed, but it seems that nothing is known about how they might be used to detect dependence. One such measure of association is selected, which is designed so that under bivariate normality, its estimator gives a reasonably accurate estimate of ρ. Then methods for testing the hypothesis of a zero correlation are studied.
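A sketch of one of the marginal-outlier-resistant alternatives listed above, the Winsorized correlation, which Winsorizes each margin before computing Pearson's correlation; the measure the article actually studies, which respects the overall structure of the data, is not reproduced here.

```python
# Sketch: Winsorized correlation versus the ordinary Pearson correlation on
# data contaminated by a single gross outlier.
import numpy as np
from scipy.stats import mstats

def winsorized_correlation(x, y, prop=0.2):
    xw = np.asarray(mstats.winsorize(x, limits=(prop, prop)))
    yw = np.asarray(mstats.winsorize(y, limits=(prop, prop)))
    return np.corrcoef(xw, yw)[0, 1]

rng = np.random.default_rng(4)
x = rng.standard_normal(50)
y = 0.5 * x + rng.standard_normal(50)
x[0], y[0] = 8.0, -8.0                    # one gross outlier masks the association
print(np.corrcoef(x, y)[0, 1], winsorized_correlation(x, y))
```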

16.
The classic recursive bivariate probit model is of particular interest to researchers since it allows for the estimation of the treatment effect that a binary endogenous variable has on a binary outcome in the presence of unobservables. In this article, the authors consider the semiparametric version of this model and introduce a model fitting procedure which permits reliable estimation of the parameters of a system of two binary outcomes with a binary endogenous regressor and smooth functions of continuous covariates. They illustrate the empirical validity of the proposal through an extensive simulation study. The approach is applied to data from a survey, conducted in Botswana, on the impact of education on women's fertility. Some studies suggest that the estimated effect could have been biased by the possible endogeneity arising because unobservable confounders (e.g., ability and motivation) are associated with both fertility and education. The Canadian Journal of Statistics 39: 259–279; 2011 © 2011 Statistical Society of Canada

17.
Traditional multiple hypothesis testing procedures fix an error rate and determine the corresponding rejection region. In 2002 Storey proposed a fixed rejection region procedure and showed numerically that it can gain more power than the fixed error rate procedure of Benjamini and Hochberg while controlling the same false discovery rate (FDR). In this paper it is proved that when the number of alternatives is small compared to the total number of hypotheses, Storey's method can be less powerful than that of Benjamini and Hochberg. Moreover, the two procedures are compared by setting them to produce the same FDR. The difference in power between Storey's procedure and that of Benjamini and Hochberg is near zero when the distance between the null and alternative distributions is large, but Benjamini and Hochberg's procedure becomes more powerful as the distance decreases. It is shown that modifying the Benjamini and Hochberg procedure to incorporate an estimate of the proportion of true null hypotheses as proposed by Black gives a procedure with superior power.
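A hedged sketch of the two ideas being contrasted: the Benjamini–Hochberg step-up rule and its adaptive modification in which the threshold is inflated by an estimate of the proportion of true nulls (here a Storey-type estimator with tuning value 0.5; other choices exist). Function names and the simulated p-values are illustrative.

```python
# Sketch: Benjamini-Hochberg step-up rule, optionally adjusted by a pi0
# estimate (adaptive BH).
import numpy as np

def bh_rejections(pvals, q=0.05, pi0=1.0):
    p = np.sort(np.asarray(pvals))
    m = len(p)
    thresh = q * np.arange(1, m + 1) / (m * pi0)
    below = np.nonzero(p <= thresh)[0]
    return 0 if below.size == 0 else below[-1] + 1    # number of rejections

rng = np.random.default_rng(5)
pvals = np.concatenate([rng.uniform(size=900),          # true nulls
                        rng.beta(0.2, 5.0, size=100)])  # alternatives
pi0_hat = min(1.0, np.mean(pvals > 0.5) / 0.5)          # Storey-type pi0 estimate
print(bh_rejections(pvals), bh_rejections(pvals, pi0=pi0_hat))
```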

18.
Sample size and the population correlation coefficient are the most important factors influencing the statistical significance of the sample correlation coefficient. It is observed that, when testing the hypothesis that the observed value of the correlation coefficient r differs from zero, Fisher's Z transformation may be inaccurate for small samples, especially when the population correlation coefficient ρ is large. In this study, a simulation program was written to illustrate how the bias in the Fisher transformation of the correlation coefficient affects estimation precision when the sample size is small and ρ is large. From the simulation results, 90% and 95% confidence intervals for the correlation coefficient were constructed and tabulated. As a result, it is suggested that, especially when ρ is greater than 0.2 and the sample size is 18 or less, Tables 1 and 2 can be used for significance testing of correlations.
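For reference, the transformation under discussion and its approximate first-order bias, stated in standard textbook form (a sketch, not the paper's derivation); the bias term is what becomes non-negligible when n is small and ρ is large.

```latex
% Fisher's z transformation, its approximate mean (including the first-order
% bias term) and approximate variance.
z = \operatorname{arctanh}(r) = \tfrac{1}{2}\log\frac{1+r}{1-r}, \qquad
\mathrm{E}(z) \approx \operatorname{arctanh}(\rho) + \frac{\rho}{2(n-1)}, \qquad
\mathrm{Var}(z) \approx \frac{1}{n-3}.
```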

19.
This paper describes an innovative application of statistical process control to the online remote control of the UK's gas transportation networks. The gas industry went through a number of changes in ownership, regulation, access to networks, organization and management culture in the 1990s. The application of SPC was motivated by these changes along with the desire to apply the best industrial statistics theory to practical problems. The work was initiated by a studentship, with the technology gradually being transferred to the industry. The combined efforts of control engineers and statisticians helped develop a novel SPC system. Having set up the control limits, a system was devised to automatically update and publish the control charts on a daily basis. The charts and an associated discussion forum are available to both managers and control engineers throughout the country at their desktop PCs. The paper describes methods of involving people to design first-class systems to achieve continual process improvement. It describes how the traditional benefits of SPC can be realized in a 'distal team working' and 'soft systems' context across four Area Control Centres, controlling a system delivering two thirds of the UK's energy needs.

20.
As is well known, the least-squares estimator of the slope of a univariate linear model sets to zero the covariance between the regression residuals and the values of the explanatory variable. To prevent the estimation process from being influenced by outliers, which can be theoretically modelled by a heavy-tailed distribution for the error term, one can substitute covariance with some robust measure of association, for example Kendall's tau in the popular Theil–Sen estimator. In a scarcely known Italian paper, Cifarelli [(1978), ‘La Stima del Coefficiente di Regressione Mediante l'Indice di Cograduazione di Gini’, Rivista di matematica per le scienze economiche e sociali, 1, 7–38. A translation into English is available at http://arxiv.org/abs/1411.4809 and will appear in Decisions in Economics and Finance] shows that a gain of efficiency can be obtained by using Gini's cograduation index instead of Kendall's tau. This paper introduces a new estimator, derived from another recently proposed association measure. Such a measure is strongly related to Gini's cograduation index, as they are both built to vanish in the general framework of indifference. The newly proposed estimator is shown to be unbiased and asymptotically normally distributed. Moreover, all considered estimators are compared via their asymptotic relative efficiency and a small simulation study. Finally, some indications about the performance of the considered estimators in the presence of contaminated normal data are provided.
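A sketch of the Theil–Sen estimator mentioned above (the Kendall's-tau-based competitor), via SciPy, compared with ordinary least squares on heavy-tailed data; the Gini-cograduation-based estimator the paper introduces is not implemented here, and the simulated data are illustrative.

```python
# Sketch: Theil-Sen slope versus the ordinary least-squares slope under
# heavy-tailed (Student-t, 2 d.f.) errors.
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 60)
y = 2.0 + 1.5 * x + rng.standard_t(df=2, size=60)    # heavy-tailed errors
slope, intercept, low, high = theilslopes(y, x)
ls_slope = np.polyfit(x, y, 1)[0]                    # ordinary least-squares slope
print(f"Theil-Sen slope: {slope:.2f}, OLS slope: {ls_slope:.2f}")
```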
