Similar Literature
20 similar documents found (search time: 991 ms)
1.
It is shown in this article that, given the moments of a distribution, any percentage point can be accurately determined from an approximation of the corresponding density function in terms of the product of an appropriate baseline density and a polynomial adjustment. This approach, which is based on a moment-matching technique, is not only conceptually simple but easy to implement. As illustrated by several applications, the percentiles so obtained are in excellent agreement with the tabulated values. Whereas statistical tables, if at all available or accessible, can hardly ever cover all the potentially useful combinations of the parameters associated with a random quantity of interest, the proposed methodology has no such limitation.
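A hedged sketch of the idea in Python (the target gamma(5) distribution, the normal baseline, and the degree-4 adjustment are illustrative choices, not the article's exact algorithm): match a polynomial adjustment of a baseline normal density to the target's raw moments, then invert the adjusted CDF numerically to read off a percentile.

```python
import numpy as np
from scipy import stats, integrate, optimize

# Illustrative target: gamma(5). We pretend only its raw moments are known.
target = stats.gamma(5.0)
deg = 4                                   # degree of the polynomial adjustment
m = [target.moment(k) for k in range(deg + 1)]

# Baseline density: normal matched to the target's mean and variance.
base = stats.norm(loc=m[1], scale=np.sqrt(m[2] - m[1] ** 2))

# Solve for coefficients xi so that base.pdf(x) * sum_j xi_j x^j
# reproduces the target's raw moments of order 0..deg.
A = np.array([[base.moment(i + j) for j in range(deg + 1)]
              for i in range(deg + 1)])
xi = np.linalg.solve(A, m)

def f_hat(x):                             # polynomially adjusted density
    return base.pdf(x) * np.polyval(xi[::-1], x)

def cdf_hat(x):                           # its CDF, by numerical integration
    return integrate.quad(f_hat, -np.inf, x)[0]

# 95th percentile of the approximation vs. the exact value.
p95 = optimize.brentq(lambda x: cdf_hat(x) - 0.95, 0.1, 30.0)
print(round(p95, 3), round(target.ppf(0.95), 3))
```

With only five moments the approximate percentile already lands close to the exact gamma quantile; raising the polynomial degree tightens the agreement at the cost of a worse-conditioned moment matrix.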

2.
In longitudinal studies of biomarkers, an outcome of interest is the time at which a biomarker reaches a particular threshold. The CD4 count is a widely used marker of human immunodeficiency virus progression. Because of the inherent variability of this marker, a single CD4 count below a relevant threshold should be interpreted with caution. Several studies have applied persistence criteria, designating the outcome as the time to the occurrence of two consecutive measurements less than the threshold. In this paper, we propose a method to estimate the time to attainment of two consecutive CD4 counts less than a meaningful threshold, which takes into account the patient-specific trajectory and measurement error. An expression for the expected time to threshold is presented, which is a function of the fixed effects, random effects and residual variance. We present an application to human immunodeficiency virus-positive individuals from a seroprevalent cohort in Durban, South Africa. Two thresholds are examined, and 95% bootstrap confidence intervals are presented for the estimated time to threshold. Sensitivity analysis revealed that results are robust to truncation of the series and variation in the number of visits considered for most patients. Caution should be exercised when interpreting the estimated times for patients who exhibit very slow rates of decline and patients who have fewer than three measurements. We also discuss the relevance of the methodology to the study of other diseases and present such applications. We demonstrate that the method proposed is computationally efficient and offers more flexibility than existing frameworks. Copyright © 2016 John Wiley & Sons, Ltd.

3.
Time-to-pregnancy (TTP) is the duration from the time a couple starts trying to become pregnant until they succeed. It is considered one of the most direct methods to measure natural fecundity in humans. Statistical tools for designing and analysing time-to-pregnancy studies belong to survival analysis, but several features require special attention. Prospective designs are difficult to carry out, and retrospective (pregnancy-based) designs, though widely used in this area, do not allow couples who remain childless to be included efficiently. A third possible design starts from a cross-sectional sample of couples currently trying to become pregnant, using the current duration (backward recurrence time) as the basis for estimating TTP. Regression analysis is then most conveniently carried out in the accelerated failure time model. This paper surveys some practical and technical-statistical issues in implementing this approach in a large telephone-based survey, the Epidemiological Observatory of Fecundity in France (Obseff).

4.
Three approaches to sequential analysis are reviewed: Chernoff's development of the Wald approach, the dynamic programming analysis developed by the author some years ago and a 'path-averaging' approach which exploits the random-walk properties of the log-posterior under a given hypothesis. These last two approaches led to explicit determinations of the optimal decision boundary and its associated costs in the limit of a small sampling cost, for a general number of hypotheses. However, the particular interest of the path-averaging approach is that it applies also to state-estimation for a hidden Markov model, where it leads to Eq. (39), which gives an immediate indication of the effectiveness with which the different states are estimated.

5.
Two methods for approximating the distribution of a noncentral random variable by a central distribution in the same family are presented. The first consists of relating a stochastic expansion of a random variable to a corresponding asymptotic expansion for its distribution function. The second approximates the cumulant generating function and is used to provide central χ2 and gamma approximations to the noncentral χ2 and gamma distributions.
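A classical instance of such a central approximation, shown here for illustration (Patnaik's two-moment fit; the article's cumulant-generating-function construction may differ), replaces the noncentral χ²(k, λ) by a scaled central χ² with matching mean and variance:

```python
from scipy import stats

def patnaik(k, lam):
    """Scaled central chi-square matching the first two moments of the
    noncentral chi2(k, lam): mean k + lam, variance 2k + 4lam."""
    c = (k + 2 * lam) / (k + lam)        # scale factor
    h = (k + lam) ** 2 / (k + 2 * lam)   # effective degrees of freedom
    return stats.chi2(h, scale=c)

k, lam = 5, 3.0
exact = stats.ncx2(k, lam)
approx = patnaik(k, lam)

# The two CDFs agree closely over a grid of quantiles.
for v in (2.0, 5.0, 8.0, 12.0, 16.0):
    print(v, round(exact.cdf(v), 4), round(approx.cdf(v), 4))
```

Matching higher cumulants, as the abstract's second method does, refines this two-moment fit further.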

6.
A method to replace a continuous univariate distribution with a discrete distribution that takes MN different values is analysed. Both distributions share the same rth moments for r = 0, …, 2N − 1, and their corresponding distribution functions coincide at least at M + 1 points. Several statistical and engineering examples are considered in which the discrete approximation may be used to avoid a simulation study that would be much more demanding computationally.
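A classical special case of such a moment-matching discretization (a sketch, not the paper's more general MN-point construction) is the N-point Gauss–Hermite rule: a discrete distribution on N atoms whose moments agree with those of the standard normal for all orders r = 0, …, 2N − 1.

```python
import numpy as np

# 5-point Gauss-Hermite rule for the probabilists' weight exp(-t^2 / 2):
# its atoms and normalised weights form a discrete distribution whose
# moments match the standard normal's for orders 0..2N-1 = 0..9.
N = 5
nodes, w = np.polynomial.hermite_e.hermegauss(N)
p = w / w.sum()                           # normalise weights to probabilities

def normal_moment(r):
    """Raw moments of N(0,1): 0 for odd r, (r-1)!! for even r."""
    return 0.0 if r % 2 else float(np.prod(np.arange(r - 1, 0, -2)))

for r in range(2 * N):
    print(r, round(float(p @ nodes ** r), 6), normal_moment(r))
```

Wherever a simulation only needs low-order moments of a smooth function of the random input, replacing the Monte Carlo draw by this five-atom distribution gives the answer in a handful of evaluations.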

7.
In this article, we propose a parametric model for the distribution of time to first event when events are overdispersed and can be properly fitted by a Negative Binomial distribution. This is a very common situation in medical statistics, where the occurrence of events is summarized as a count for each patient and the simple Poisson model cannot adequately account for overdispersion of the data. In this situation, studying the time of occurrence of the first event can be of interest. From the Negative Binomial distribution of counts, we derive a new parametric model for time to first event and apply it to fit the distribution of time to first relapse in multiple sclerosis (MS). We develop the regression model with methods for covariate estimation. We show that, as the Negative Binomial model properly fits relapse count data, this new model matches the distribution of time to first relapse very closely, as tested in two large datasets of MS patients. Finally, we compare its performance, when fitting time to first relapse in MS, with that of other models widely used in survival analysis (the semiparametric Cox model and the parametric exponential, Weibull, log-logistic and log-normal models).

8.
Although several authors have indicated that the median test has low power in small samples, it continues to be presented in many statistical textbooks, included in a number of popular statistical software packages, and used in a variety of application areas. We present results of a power simulation study showing that the median test has noticeably lower power than other readily available rank tests, even for the double exponential distribution, for which it is asymptotically most powerful. We suggest that the median test be "retired" from routine use and recommend alternative rank tests that have superior power over a relatively large family of symmetric distributions.
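A small simulation in the spirit of the study (the sample size, shift, and replication count are illustrative choices, not the paper's design) compares the empirical power of the median test and the Wilcoxon rank-sum test under a Laplace, i.e. double exponential, shift alternative:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def power(pvalue, shift, n=20, reps=2000, alpha=0.05):
    """Empirical power of a two-sample test under a Laplace shift."""
    hits = 0
    for _ in range(reps):
        x = rng.laplace(size=n)
        y = rng.laplace(loc=shift, size=n)
        hits += pvalue(x, y) < alpha
    return hits / reps

p_med = power(lambda x, y: stats.median_test(x, y)[1], shift=1.0)
p_wrs = power(lambda x, y: stats.mannwhitneyu(x, y)[1], shift=1.0)
print("median test power:", p_med)
print("rank-sum power   :", p_wrs)
```

Despite the median test's asymptotic optimality for this distribution, its small-sample conservatism (the 2 × 2 table behind it is highly discrete) costs it power relative to the rank-sum test, which is the paper's central point.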

9.
10.
Mood's test, a relatively old procedure (and the oldest non-parametric test in its class) for detecting heterogeneity of variance, is still widely used in areas such as biometry, biostatistics and medicine. Although it is a popular test, it is not suitable for use in a two-way factorial design. In this paper, Mood's test is generalised to the 2 × 2 factorial design setting and its performance is compared with that of Klotz's test. The power and robustness of these tests are examined in detail by means of a simulation study with 10,000 replications. Based on the simulation results, the generalised Mood's and Klotz's tests can especially be recommended in settings in which the parent distribution is symmetric. As an example application, we analyse data from a multi-factor agricultural system involving chilli peppers, nematodes and yellow nutsedge. This example dataset suggests that the performance of the generalised Mood's test agrees with that of the generalised Klotz's test.

11.
Consider the usual one-way fixed-effect analysis of variance model, where the populations Πi (i = 0, 1, …, k) have independent normal distributions with unknown means and a common unknown variance. Let Π0 be a control population with which the other (treatment) populations are to be compared. The basic problem is to select the treatment that is closest to the control mean. This situation occurs when one of the Πi must be chosen, regardless of how many are equivalent to the control in the sense of having means sufficiently close. This paper follows the approach of Hsu (1996) and is based on a set of simultaneous confidence intervals. It provides a table of critical values which allows direct implementation of the new inference procedure. The applications given are of the balanced cross-over design type with negligible carry-over effects, for which the results of this paper may be used. One of the applications refers to the selection of a drug which may not be bioequivalent to a reference formulation but is the closest of those drugs readily available to the group of patients considered.

12.
The aim of the paper is to characterize the factors that determine the transition from university to work and to evaluate the effectiveness of universities and course programmes with respect to the labour-market outcomes of their graduates. The study focuses on the analysis of the time to obtain the first job, taking into account the graduates' characteristics and the effects pertaining to course programmes and universities. To this end, a three-level discrete-time survival model is used, in which the logit of the hazard (conditionally on the random effects at the course-programme and university levels) is a linear function of the covariates. The analysis is carried out using a large data set from a survey on job opportunities for 1992 Italian graduates.

13.
This paper estimates the causal impact of investment in information and communication technologies (ICT) on student performance in mathematics, as measured in the Programme for International Student Assessment (PISA) 2012 for Spain. To do this, we apply a methodology that is new in this context, Bayesian Additive Regression Trees, which has important advantages over more standard parametric specifications. The results indicate that ICT has a moderate positive effect on math scores. In addition, we analyze how this effect interacts with variables related to school features and student socioeconomic status, finding that ICT investment is especially beneficial for students from low socioeconomic backgrounds.

14.
SUSHI to Go
Tim Jewell, Serials Review, 2013, 39(3): 153-154

The Internet, Google, e-journals, packages, e-books and patron-driven acquisitions have all been perceived as "a threat to libraries as we know them." Yet, in spite of these developments and under the weight of chronic budget pressures, the typical academic library now offers more users better access to more content and services than ever before. In this session we will look at how librarians and the vendors that serve them have responded to these "threats" to their future to create new and improved services.

15.
The use of parametric linear mixed models and generalized linear mixed models to analyze longitudinal data collected during randomized controlled trials (RCTs) is conventional. The application of these methods, however, is restricted by the various assumptions they require. When the number of observations per subject is sufficiently large and individual trajectories are noisy, functional data analysis (FDA) methods serve as an alternative to parametric longitudinal data analysis techniques. However, the use of FDA in RCTs is rare. In this paper, the effectiveness of FDA and linear mixed models (LMMs) was compared by analyzing data from rural persons living with HIV and comorbid depression enrolled in a depression treatment randomized clinical trial. Interactive voice response systems were used for weekly administrations of the 10-item Self-Administered Depression Scale (SADS) over 41 weeks. Functional principal component analysis and functional regression analysis methods detected a statistically significant difference in SADS between telephone-administered interpersonal psychotherapy (tele-IPT) and controls, but a linear mixed-effects model did not. Additional simulation studies were conducted to compare FDA and LMMs under a different nonlinear trajectory assumption. In this clinical trial, with sufficient per-subject measured outcomes and individual trajectories that are noisy and nonlinear, we found FDA methods to be a better alternative to LMMs.

16.
In this paper we outline a class of fully parametric proportional hazards models, in which the baseline hazard is assumed to be a power transform of the time scale, corresponding to assuming that survival times follow a Weibull distribution. Such a class of models allows for the possibility of time-varying hazard rates, but assumes a constant hazard ratio. We outline how Bayesian inference proceeds for such a class of models using asymptotic approximations which require only the ability to maximize the joint log posterior density. We apply these models to a clinical trial to assess the efficacy of neutron therapy compared to conventional treatment for patients with tumors of the pelvic region. In this trial there was prior information about the log hazard ratio both in terms of elicited clinical beliefs and the results of previous studies. Finally, we consider a number of extensions to this class of models, in particular the use of alternative baseline functions, and the extension to multi-state data.
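A minimal sketch of such a fully parametric Weibull proportional-hazards fit (simulated data, and a flat prior so that maximizing the joint log posterior reduces to maximum likelihood; all parameter values and names are illustrative, not the trial's):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated Weibull proportional-hazards data:
#   h(t | x) = g * t**(g-1) * exp(b*x),  hence  H(t | x) = t**g * exp(b*x).
g_true, b_true, n = 1.5, 0.7, 300
x = rng.binomial(1, 0.5, n).astype(float)          # treatment indicator
t = (-np.log(rng.uniform(size=n)) * np.exp(-b_true * x)) ** (1.0 / g_true)
c = rng.uniform(0.0, 3.0, n)                       # censoring times
obs = np.minimum(t, c)
d = (t <= c).astype(float)                         # event indicator

def negloglik(theta):
    """Negative log likelihood: sum of d*log h(t) - H(t) over subjects."""
    lg, b = theta              # lg = log(shape) keeps the shape positive
    g = np.exp(lg)
    log_h = np.log(g) + (g - 1.0) * np.log(obs) + b * x
    H = obs ** g * np.exp(b * x)
    return -np.sum(d * log_h - H)

fit = minimize(negloglik, x0=[0.0, 0.0], method="Nelder-Mead")
g_hat, b_hat = np.exp(fit.x[0]), fit.x[1]
print("shape:", round(g_hat, 3), " log hazard ratio:", round(b_hat, 3))
```

With an informative prior on the log hazard ratio, as in the trial described, one would simply add the log prior density to `negloglik` and maximize the same way.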

17.
On the Proper Degree of Mathematics in Economics
Hu Weiqing (胡伟清), Statistical Research (统计研究), 2006, 23(1): 74-77
Ever since William Petty first applied mathematics in economic writing, economics and mathematics have been inseparably linked. The use of mathematics has not only given economic research new tools but has also advanced the discipline itself. Yet criticism of the "excessive" use of mathematics in economics has never ceased. Criticism within China is naturally a recent phenomenon: before the 1990s, Chinese economists made essentially no use of mathematics, so there was nothing to criticize. Abroad, such criticism has a much longer history. Of course, none of these critics deny the value of applying mathematics to economics; rather, they hold that economics should use mathematics judiciously, without becoming "mathematized" or "abusing" it. Interestingly, rebuttals of these criticisms seem rare; even those scholars who favor mathematization…

18.
In this paper, Anbar's (1983) approach for estimating the difference between two binomial proportions is discussed with respect to a hypothesis-testing problem. This approach yields two possible testing strategies. While the two tests are expected to agree for large sample sizes when the two proportions are equal, they are shown to perform quite differently in terms of their Type I error probabilities for selected sample sizes. Moreover, the tests can lead to different conclusions, which is illustrated via a simple example, and the probability of such cases can be relatively large. In an attempt to improve the tests while preserving their relative simplicity, a modified test is proposed. The performance of this test and of a conventional test based on the normal approximation is assessed. It is shown that the modified Anbar's test better controls the probability of a Type I error for moderate sample sizes.
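Anbar's statistic itself is not reproduced here, but the kind of Type I error assessment the paper performs can be sketched for the conventional pooled normal-approximation test (the per-arm sample size and true proportion are illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def pooled_z_pvalue(x1, n1, x2, n2):
    """Conventional two-proportion z-test with pooled variance."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = np.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    return 2 * stats.norm.sf(abs((p1 - p2) / se))

# Monte Carlo Type I error under H0: p1 = p2 = 0.3 with n = 30 per arm.
n, p_true, reps = 30, 0.3, 20000
x1 = rng.binomial(n, p_true, reps)
x2 = rng.binomial(n, p_true, reps)
pvals = np.array([pooled_z_pvalue(a, n, b, n) for a, b in zip(x1, x2)])
size = (pvals < 0.05).mean()
print("empirical size at alpha = 0.05:", size)
```

Repeating the simulation with each of the competing statistics over a grid of sample sizes is exactly how the paper's claim about Type I error control can be checked.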

19.
When testing the equality of means from two independent normally distributed populations whose variances are unknown but assumed equal, the classical Student's two-sample t-test is recommended. If the underlying populations are normal with unknown and unequal variances, either Welch's t-statistic or Satterthwaite's approximate F test is suggested. However, Welch's procedure is non-robust under most non-normal distributions. In practice, the strict assumptions of data independence, homogeneity of variances, and identical normal distributions hold only approximately, and procedures tolerate departures from them to varying degrees. Few textbooks offer alternatives when one or more of the underlying assumptions are not defensible. While more than a few non-parametric (rank) procedures provide alternatives to Student's t-test, we restrict this review to the promising alternatives to Student's two-sample t-test in non-normal models.
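A minimal illustration of the distinction (data simulated with clearly unequal variances; scipy's `ttest_ind` with `equal_var=False` computes Welch's statistic with Satterthwaite's approximate degrees of freedom):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Two normal samples with unequal variances and unequal sizes.
x = rng.normal(0.0, 1.0, size=25)
y = rng.normal(0.5, 3.0, size=40)

t_student, p_student = stats.ttest_ind(x, y, equal_var=True)   # pooled
t_welch, p_welch = stats.ttest_ind(x, y, equal_var=False)      # Welch

# Satterthwaite's approximate degrees of freedom for Welch's t.
v1, v2 = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
df_welch = (v1 + v2) ** 2 / (v1 ** 2 / (len(x) - 1) + v2 ** 2 / (len(y) - 1))
print(f"Student p={p_student:.3f}  Welch p={p_welch:.3f}  df={df_welch:.1f}")
```

The Satterthwaite degrees of freedom always fall between min(n1, n2) − 1 and n1 + n2 − 2, shrinking toward the smaller sample as the variance imbalance grows.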

20.
A Study of the Shock Transmission Mechanisms in China's Transition to the New Normal
This paper analyzes the shock transmission mechanisms underlying China's gradual transition to a New Normal economy since 2007. Using an SV-TVP-VAR model, we examine the dynamic effects on China's economic fluctuations of supply and demand shocks, represented by technology shocks and investment shocks respectively, over 2001Q1-2015Q3. The results show that the transmission mechanisms of both shocks underwent structural change around 2007. Specifically, in terms of direction, the short-run effect of investment shocks remained positive but became more volatile, while their medium- and long-run effects turned negative and grew progressively stronger; the positive effect of technology shocks on China's economic growth strengthened steadily over both the short and the medium-to-long run, though it has weakened somewhat since 2014. In terms of magnitude, subperiod variance decompositions show that after 2007 the explanatory power of investment shocks for output fluctuations rose sharply, whereas the influence of technology shocks remained fairly stable. These findings suggest that China's transition to the New Normal stems mainly from adverse demand-side shocks, although both supply and demand shocks have recently shown adverse tendencies; we accordingly offer corresponding policy recommendations.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号