Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
For right-censored survival data, it is well known that the mean survival time can be consistently estimated when the support of the censoring time contains the support of the survival time. In practice, however, this condition is easily violated because the follow-up of a study is usually restricted to a finite window. In this article, we show that the mean survival time is still estimable from a linear model when the support of some covariate(s) with non-zero coefficient(s) is unbounded, regardless of the length of follow-up. This implies that the mean survival time can be well estimated in practice when the support of the linear predictor is wide. The theoretical finding is further verified for finite samples by simulation studies. Simulations also show that, when both models are correctly specified, the linear model yields reasonable mean squared prediction errors and outperforms the Cox model, particularly under heavy censoring and short follow-up.
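As a toy illustration of the estimability claim, the following sketch simulates log-normal survival times driven by an unbounded covariate, censors them at a fixed end of follow-up, and recovers the mean survival time by maximizing a censored-normal (Tobit-type) likelihood on the log scale. All names, parameter values and the log-normal error assumption are illustrative choices, not the authors' code.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)                                   # unbounded covariate
    log_t = 0.5 + 1.0 * x + rng.normal(scale=0.8, size=n)    # linear model on log scale
    t = np.exp(log_t)
    c = 3.0                                                  # fixed end of follow-up
    y = np.minimum(t, c)                                     # observed time
    delta = (t <= c).astype(float)                           # 1 = event, 0 = censored

    def negloglik(theta):
        b0, b1, log_s = theta
        s = np.exp(log_s)
        z = (np.log(y) - (b0 + b1 * x)) / s
        # events contribute the density of log T; censored cases the survival beyond log c
        ll = delta * (norm.logpdf(z) - np.log(s)) + (1 - delta) * norm.logsf(z)
        return -ll.sum()

    fit = minimize(negloglik, x0=np.zeros(3), method="BFGS")
    b0, b1, s = fit.x[0], fit.x[1], np.exp(fit.x[2])
    print("estimated E[T | x=0]:", np.exp(b0 + 0.5 * s**2))  # log-normal mean
    print("true      E[T | x=0]:", np.exp(0.5 + 0.5 * 0.8**2))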

2.
Transco is the main provider of gas transportation to domestic and commercial customers in mainland Britain. Gas arrives in Britain at a steady rate but is consumed with a distinct diurnal pattern. The safe and timely movement of gas, from its arrival at the beach at various places in Britain to its delivery at burners, is the main driver for System Operations. The movement of gas is meticulously controlled and monitored, resulting in a mass of information on pressure, flow and temperature. Gas is stored temporarily in various storage vessels and is moved around the pipes and in and out of storage as demand dictates. Demand is mostly dictated by the weather and is therefore subject to much variation. Transco and its predecessors have been transporting gas for over 50 years and are very successful, as judged by their excellent safety record and the continual delivery of gas. Nevertheless, the company wished to improve and to make further use of the many measurements collected. SPC (statistical process control) is ideal for improving communication and understanding through increased visibility of data. All companies face special issues when they implement SPC, and this paper describes the way these were dealt with in System Operations and the lessons learnt along the way. The first part describes how performance measures were chosen for investigation. It includes a novel use of correlation between output and day-to-day conditions, which was successfully turned into a measure to check the uncheckable. The second part addresses the issues involved in applying SPC early, while features of the system are still unexplained. SPC has helped enhance understanding of the complex transportation process, encouraged teamwork, improved performance and provided an objective means of decision making.

3.
Matsumoto and Yor [2001. An analogue of Pitman's 2M-X theorem for exponential Wiener functionals. Part II: the role of the GIG laws. Nagoya Math. J. 162, 65–86] discovered an interesting invariance property of a product of the generalized inverse Gaussian (GIG) and the gamma distributions. For univariate random variables or symmetric positive definite random matrices it is a characteristic property for this pair of distributions. It appears that for random vectors the Matsumoto–Yor property characterizes only very special families of multivariate GIG and gamma distributions: the components of the respective random vectors are grouped into independent subvectors, each subvector having linearly dependent components. This complements the version of the multivariate Matsumoto–Yor property on trees and the related characterization obtained in Massam and Wesołowski [2004. The Matsumoto–Yor property on trees. Bernoulli 10, 685–700].
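For reference, the scalar Matsumoto–Yor property being generalized can be written as follows, in the GIG(p, a, b) parameterization with density proportional to x^{p-1} exp(-(ax + b/x)) on (0, ∞); this statement is reproduced from memory of the Letac–Wesołowski formulation, so the exact convention should be checked against the cited papers:

    X \sim \mathrm{GIG}(-p,a,b),\ Y \sim \mathrm{Gamma}(p,a),\ X \perp\!\!\!\perp Y
    \;\Longrightarrow\;
    U=\frac{1}{X+Y}\ \perp\!\!\!\perp\ V=\frac{1}{X}-\frac{1}{X+Y},
    \quad U \sim \mathrm{GIG}(-p,b,a),\ V \sim \mathrm{Gamma}(p,b).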

4.
The author examines whether the unexpectedly high number of births recorded in Poland in 1982 and 1983 is evidence of a change in fertility patterns. It is suggested that the increase in the gross reproduction rate that occurred was due to lower standards of living and fewer opportunities to acquire material possessions or travel abroad as an alternative to having children. Some of the increase may also be due to new pro-natalist measures such as prolongation of paid leave of absence for mothers. The author suggests that the increase in fertility is temporary and that fertility will soon decline to its former level.

5.
6.
7.
In reliability and biometry, it is common practice to choose a failure model by first assessing the failure rate function subjectively and then invoking the well-known exponentiation formula. The derivation of this formula is based on the assumption that the underlying failure distribution is absolutely continuous. Thus, implicit in the above approach is the understanding that the selected failure distribution will be absolutely continuous. The purpose of this note is to point out that absolute continuity may fail when the failure rate is assessed conditionally, and in particular when it is conditioned on certain types of covariates, called internal covariates. When such is the case, the exponentiation formula should not be used.
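For concreteness, the exponentiation formula referred to here is the standard identity linking the failure (hazard) rate h to the survival function S, whose derivation requires the failure distribution to be absolutely continuous:

    h(t) = \lim_{\Delta t \downarrow 0}
           \frac{P(t \le T < t+\Delta t \mid T \ge t)}{\Delta t},
    \qquad
    S(t) = \exp\!\left(-\int_0^t h(u)\,du\right).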

8.
Dynamic treatment strategies are designed to change treatments over time in response to intermediate outcomes. They can be deployed for primary treatment as well as for the introduction of adjuvant treatment or other treatment-enhancing interventions. When treatment interventions are delayed until needed, more cost-efficient strategies will result. Sequential multiple assignment randomized (SMAR) trials allow for unbiased estimation of the marginal effects of different sequences of history-dependent treatment decisions. Because a single SMAR trial enables evaluation of many different dynamic regimes at once, it is naturally thought to require larger sample sizes than the parallel randomized trial. In this paper, we compare power between SMAR trials studying a regime where treatment boosting enters when triggered by an observed event, versus the parallel design, where a treatment boost is consistently prescribed over the entire study period. In some settings, we found that the dynamic design yields the more efficient trial for the detection of treatment activity. We develop one particular trial to compare a dynamic nursing intervention with telemonitoring for the enhancement of medication adherence in epilepsy patients. To this end, we derive from the SMAR trial data either an average of conditional treatment effects (‘conditional estimator’) or the population-averaged (‘marginal’) estimator of the dynamic regimes. Analytical sample size calculations for the parallel design and the conditional estimator are compared with simulated results for the population-averaged estimator. We conclude that in specific settings, well-chosen SMAR designs may require fewer data for the development of more cost-efficient treatment strategies than parallel designs.
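As background for the analytical calculations mentioned above, the parallel-design sample size rests on the usual normal-approximation formula for a two-arm comparison of means; the sketch below is generic trial arithmetic with invented inputs, not the paper's SMAR-specific derivation.

    from scipy.stats import norm

    def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
        """Normal-approximation sample size per arm for a two-sided test
        of a mean difference delta with common standard deviation sigma."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * ((z_a + z_b) * sigma / delta) ** 2

    # hypothetical effect: detect a 0.3 SD improvement in adherence
    print(n_per_arm(delta=0.3, sigma=1.0))   # about 175 patients per arm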

9.
In this paper, we investigate the properties of the optimal portfolio in the sense of maximizing the Sharpe ratio (SR) and develop a procedure for calculating the risk of this portfolio. This is achieved by constructing an optimal portfolio which minimizes the Value-at-Risk (VaR) and at the same time coincides with the tangency (market) portfolio on the efficient frontier, which corresponds to the SR portfolio. The resulting significance level of the minimum-VaR portfolio is then used to determine the risk of both the market portfolio and the corresponding SR portfolio. However, the expression for this significance level depends on unknown parameters which have to be estimated in practice. This leads to an estimator of the significance level whose distributional properties are investigated in detail. Based on these results, a confidence interval for the suggested risk measure of the SR portfolio is constructed and applied to real data. Both the theoretical and the empirical findings document that the SR portfolio is very risky, since the corresponding significance level is smaller than 90% in most of the cases considered.
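A minimal sketch of the two building blocks, assuming normally distributed returns and invented inputs: the tangency (maximum-SR) portfolio weights and the Gaussian VaR of the resulting portfolio. The paper's actual object of interest, the implied significance level of the minimum-VaR portfolio and its estimator, is not reproduced here.

    import numpy as np
    from scipy.stats import norm

    mu = np.array([0.08, 0.10, 0.12])            # expected returns (assumed)
    Sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])       # return covariance (assumed)
    rf = 0.02                                    # risk-free rate (assumed)

    # tangency (maximum Sharpe ratio) portfolio: w proportional to Sigma^{-1}(mu - rf)
    w = np.linalg.solve(Sigma, mu - rf)
    w = w / w.sum()

    m = w @ mu                                   # portfolio mean
    s = np.sqrt(w @ Sigma @ w)                   # portfolio standard deviation
    print("Sharpe ratio:", (m - rf) / s)

    # Gaussian Value-at-Risk at level alpha
    alpha = 0.05
    print("VaR at 5%:", -(m + s * norm.ppf(alpha)))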

10.
Nonlinear structural equation modeling provides many advantages over analyses based on manifest variables only. Several approaches for the analysis of latent interaction effects have been developed within the last 15 years, including the partial least squares product indicator approach (PLS-PI), the constrained product indicator approach using the LISREL software (LISREL-PI), and the distribution-analytic latent moderated structural equations approach (LMS) using the Mplus program. An assumed advantage of PLS-PI is that it is able to deal with very large numbers of indicators, while LISREL-PI and LMS have not been investigated under such conditions. In a Monte Carlo study, the performance of LISREL-PI and LMS was compared to PLS-PI results previously reported in Chin et al. (2003) and Goodhue et al. (2007) for identical conditions. The latent interaction model included six indicator variables for the measurement of each latent predictor variable and the latent criterion, and sample size was N=100. The results showed that PLS-PI’s linear and interaction parameter estimates were downward biased, while parameter estimates were unbiased for LISREL-PI and LMS. True standard errors were smallest for PLS-PI, while the power to detect the latent interaction effect was higher for LISREL-PI and LMS. Compared to the symmetric distributions of interaction parameter estimates for LISREL-PI and LMS, PLS-PI showed a distribution that was symmetric for positive values, but included outlying negative estimates. Possible explanations for these findings are discussed.

11.
Point process models are a natural approach for modelling data that arise as point events. In the case of Poisson counts, these may be fitted easily as a weighted Poisson regression. Point processes lack the notion of sample size. This is problematic for model selection, because various classical criteria such as the Bayesian information criterion (BIC) are a function of the sample size, n, and are derived in an asymptotic framework where n tends to infinity. In this paper, we develop an asymptotic result for Poisson point process models in which the observed number of point events, m, plays the role that sample size does in the classical regression context. Following from this result, we derive a version of BIC for point process models, and when fitted via penalised likelihood, conditions for the LASSO penalty that ensure consistency in estimation and the oracle property. We discuss challenges extending these results to the wider class of Gibbs models, of which the Poisson point process model is a special case.
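Schematically, the criterion derived in the paper replaces the classical sample size n with the observed number of points m; with ℓ the maximized log-likelihood of the fitted Poisson process and p the number of intensity parameters, it takes the familiar form (exact conditions and constants are as given in the paper):

    \mathrm{BIC}_m \;=\; -2\,\ell(\hat\theta) \;+\; p \log m .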

12.
13.
With more and better clinical data being captured outside of clinical studies and greater data sharing of clinical studies, external controls may become a more attractive alternative to randomized clinical trials (RCTs). Both industry and regulators recognize that in situations where a randomized study cannot be performed, external controls can provide the needed contextualization to allow a better interpretation of studies without a randomized control. It is also agreed that external controls will not fully replace RCTs as the gold standard for formal proof of efficacy in drug development and the yardstick of clinical research. However, it remains unclear in which situations conclusions about efficacy and a positive benefit/risk can reliably be based on the use of an external control. This paper will provide an overview on types of external control, their applications and the different sources of bias their use may incur, and discuss potential mitigation steps. It will also give recommendations on how the use of external controls can be justified.

14.
Singular spectrum analysis (SSA) is an increasingly popular and widely adopted filtering and forecasting technique which is currently exploited in a variety of fields. Given its increasing application and superior performance in comparison to other methods, it is pertinent to study and distinguish between the two forecasting variants of SSA, referred to as Vector SSA (SSA-V) and Recurrent SSA (SSA-R). The general notion is that SSA-V is more robust and provides better forecasts than SSA-R, especially for time series which are non-stationary and asymmetric, or affected by unit root problems, outliers or structural breaks. However, there currently exists no empirical evidence supporting these notions or suggesting that SSA-V is better than SSA-R. In this paper, we evaluate the out-of-sample forecasting capabilities of the optimised SSA-V and SSA-R forecasting algorithms via a simulation study and an application to 100 real data sets with varying structures, to provide a statistically reliable answer to the question of which SSA algorithm is best for forecasting at both short and long run horizons, based on several important criteria.
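For readers unfamiliar with the recurrent variant, the sketch below implements a bare-bones SSA-R forecast (embedding, rank-r SVD truncation, diagonal averaging, then extension by the linear recurrence built from the leading left singular vectors), following the textbook formulation of Golyandina et al.; it is a plain illustration, not the optimised algorithms evaluated in the paper.

    import numpy as np

    def ssa_r_forecast(x, L, r, steps):
        """Basic recurrent SSA (SSA-R): reconstruct with the leading r
        components, then forecast with the induced linear recurrence."""
        n, K = len(x), len(x) - L + 1
        X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :r] * s[:r]) @ Vt[:r]                      # rank-r approximation
        rec, cnt = np.zeros(n), np.zeros(n)
        for j in range(K):                                    # Hankelization
            rec[j:j + L] += Xr[:, j]
            cnt[j:j + L] += 1
        rec /= cnt
        P = U[:, :r]
        pi = P[-1, :]                                         # last components
        a = (P[:-1, :] @ pi) / (1.0 - (pi ** 2).sum())        # recurrence coefficients
        out = list(rec)
        for _ in range(steps):
            out.append(np.dot(a, out[-(L - 1):]))             # oldest-to-newest window
        return np.array(out[n:])

    # toy usage: 12-step-ahead forecast of a noisy periodic series
    t = np.arange(200)
    x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(1).normal(size=200)
    print(ssa_r_forecast(x, L=48, r=2, steps=12))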

15.
For interval estimation of a proportion, coverage probabilities tend to be too large for “exact” confidence intervals based on inverting the binomial test and too small for the interval based on inverting the Wald large-sample normal test (i.e., sample proportion ± z-score × estimated standard error). Wilson's suggestion of inverting the related score test with the null rather than the estimated standard error yields coverage probabilities close to nominal confidence levels, even for very small sample sizes. The 95% score interval behaves similarly to the adjusted Wald interval obtained after adding two “successes” and two “failures” to the sample. In elementary courses, with the score and adjusted Wald methods it is unnecessary to provide students with awkward sample size guidelines.
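The three intervals discussed are easy to compute directly; a sketch (with the Agresti–Coull "add two successes and two failures" version standing in for the adjusted Wald interval, as the abstract describes) follows.

    import numpy as np
    from scipy.stats import norm

    def wald(x, n, conf=0.95):
        z = norm.ppf((1 + conf) / 2)
        p = x / n
        se = np.sqrt(p * (1 - p) / n)
        return p - z * se, p + z * se

    def adjusted_wald(x, n, conf=0.95):
        # add two "successes" and two "failures", then apply the Wald formula
        return wald(x + 2, n + 4, conf)

    def wilson(x, n, conf=0.95):
        # invert the score test: null rather than estimated standard error
        z = norm.ppf((1 + conf) / 2)
        p = x / n
        center = (p + z**2 / (2 * n)) / (1 + z**2 / n)
        half = (z / (1 + z**2 / n)) * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
        return center - half, center + half

    # tiny sample where the Wald interval degenerates to a point
    print(wald(0, 10))            # (0.0, 0.0)
    print(adjusted_wald(0, 10))   # non-degenerate (truncate at 0 in practice)
    print(wilson(0, 10))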

16.
The exact mean-squared error (MSE) of estimators of the variance in nonparametric regression based on quadratic forms is investigated. In particular, two classes of estimators are compared: Hall, Kay and Titterington's optimal difference-based estimators and a class of ordinary difference-based estimators which generalize methods proposed by Rice and by Gasser, Sroka and Jennen-Steinmetz. For small sample sizes the MSE of the first estimator is essentially inflated by the magnitude of the integrated squares of the first two derivatives of the regression function. It is shown that in many situations ordinary difference-based estimators are more appropriate for estimating the variance, because they control the bias much better and hence have much better overall performance. It is also demonstrated that Rice's estimator does not always behave well. Data-driven guidelines are given for selecting the estimator with the smallest MSE.
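The two simplest members of the ordinary difference-based class are short enough to state in code; the sketch below gives the Rice first-difference estimator and the Gasser–Sroka–Jennen-Steinmetz second-difference estimator for an equispaced design (the general versions use design-dependent weights).

    import numpy as np

    def rice_variance(y):
        """Rice first-difference variance estimator."""
        d = np.diff(y)
        return (d ** 2).sum() / (2 * (len(y) - 1))

    def gsjs_variance(y):
        """Gasser-Sroka-Jennen-Steinmetz estimator: pseudo-residuals
        y[i-1]/2 - y[i] + y[i+1]/2, scaled by 2/3 because their variance
        under i.i.d. noise is (1/4 + 1 + 1/4) sigma^2 = 1.5 sigma^2."""
        e = 0.5 * y[:-2] - y[1:-1] + 0.5 * y[2:]
        return (2.0 / 3.0) * (e ** 2).sum() / (len(y) - 2)

    # equispaced design, smooth trend, true error variance 0.25
    rng = np.random.default_rng(2)
    x = np.linspace(0, 1, 200)
    y = np.sin(3 * x) + rng.normal(scale=0.5, size=x.size)
    print(rice_variance(y), gsjs_variance(y))   # both close to 0.25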

17.
18.
"One can often gain insight into the aetiology of a disease by relating mortality rates in different areas to explanatory variables. Multiple regression techniques are usually employed, but unweighted least squares may be inappropriate if the areas vary in population size. Also, a fully weighted regression, with weights inversely proportional to binomial sampling variances, is usually too extreme. This paper proposes an intermediate solution via maximum likelihood which takes account of three sources of variation in death rates: sampling error, explanatory variables and unexplained differences between areas. The method is also adapted for logit (death rates), standardized mortality ratios (SMRs) and log (SMRs). Two [United Kingdom] examples are presented."  相似文献   

19.
It is commonly asserted that the Gibbs sampler is a special case of the Metropolis–Hastings (MH) algorithm. While this statement is true for certain Gibbs samplers, it is not true in general for the version that is taught and used most often, namely, the deterministic scan Gibbs sampler. In this note, I prove that there exist deterministic scan Gibbs samplers that do not exhibit detailed balance and hence cannot be considered MH samplers. The nuances of various Gibbs sampling schemes are discussed.
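To make the object under discussion concrete, here is a minimal deterministic scan Gibbs sampler for a bivariate normal target; each full-conditional update is reversible on its own, but the fixed x-then-y composition is the kernel whose detailed balance is at issue in the note. The target and all settings are illustrative.

    import numpy as np

    def deterministic_scan_gibbs(rho, n_iter, rng):
        """Deterministic scan Gibbs for a standard bivariate normal with
        correlation rho: update x given y, then y given x, in the same
        fixed order at every iteration."""
        s = np.sqrt(1 - rho ** 2)
        x = y = 0.0
        draws = np.empty((n_iter, 2))
        for i in range(n_iter):
            x = rho * y + s * rng.normal()   # x | y ~ N(rho * y, 1 - rho^2)
            y = rho * x + s * rng.normal()   # y | x ~ N(rho * x, 1 - rho^2)
            draws[i] = x, y
        return draws

    draws = deterministic_scan_gibbs(0.9, 20000, np.random.default_rng(4))
    print(draws.mean(axis=0))                # approximately (0, 0)
    print(np.corrcoef(draws.T)[0, 1])        # approximately 0.9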

20.
The central limit theorem indicates that, as the sample size goes to infinity, the sampling distribution of the mean tends to a normal distribution; it is the basis for the most common confidence interval and sample size formulas. This study analyzes what sample size is large enough to assume that the distribution of the estimator of a proportion follows a normal distribution. We also propose the use of a correction factor in sample size formulas to maintain the nominal confidence level even when the central limit theorem does not apply for these distributions.
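The classical formula being corrected is easy to state; in the sketch below, the correction argument is a placeholder for the paper's proposed factor, whose actual form is not reproduced here.

    import math
    from scipy.stats import norm

    def n_for_proportion(p, margin, conf=0.95, correction=1.0):
        """Normal-approximation sample size for estimating a proportion p
        to within +/- margin; `correction` stands in for an inflation
        factor applied when the normal approximation is doubtful."""
        z = norm.ppf((1 + conf) / 2)
        return math.ceil(correction * z ** 2 * p * (1 - p) / margin ** 2)

    print(n_for_proportion(0.50, 0.03))   # classical worst case: 1068
    print(n_for_proportion(0.02, 0.01))   # small p: normal approximation suspect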
