Similar Articles
20 similar articles retrieved.
1.
2.
Some governments rely on centralized, official sets of population forecasts for planning capital facilities. But the nature of population forecasting, as well as the milieu of government forecasting in general, can lead to extrapolative forecasts that are poorly suited to long-range planning. This report discusses these matters and suggests that custom-made forecasts, together with forecast guidelines and a review process that stresses justification of forecast assumptions, may be a more realistic basis for planning individual facilities than general-purpose, official forecasts.

3.
4.
5.
A prognostic index (PI) is usually derived from a regression model as a weighted mean of the covariates, with weights (partial scores) proportional to the parameter estimates. When a PI is applied to patients other than those considered for its development, assessing its validity on the new case series is crucial. For this purpose, Van Houwelingen (2000) proposed a method of validation by calibration, which limits overfitting by embedding the original model into a new one, so that only a few parameters need to be estimated. Here we address the problem of PI validation and revision with this approach when the PI is used for classification and represents the linear predictor of a Weibull model derived from an accelerated failure time parameterization, rather than the proportional hazards parameterization originally described by Van Houwelingen. We show that the Van Houwelingen method can be applied in a straightforward manner, provided that the parameterization originally used in the PI model is appropriately taken into account. We also show that model validation and revision can be carried out by modifying the cut-off values used for prognostic grouping without affecting the partial scores of the original PI. This procedure can simplify the clinician's use of an established PI for classification purposes.
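
A minimal Python sketch of the mechanics described above (our illustration, not the authors' code: the data, coefficient values, and function names are assumed): a Weibull accelerated failure time model is fitted on simulated development data, its linear predictor is frozen as the PI, and validation by calibration on a new simulated series re-estimates only an intercept, a slope on the PI, and the scale parameter. A calibration slope close to 1 suggests the frozen PI transfers well.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def weibull_aft_negloglik(params, X, time, event):
    """Negative log-likelihood of a Weibull AFT model with right censoring.
    params = (intercept, beta_1..beta_p, log_sigma); per-subject scale = exp(intercept + X beta)."""
    p = X.shape[1]
    intercept, beta, log_sigma = params[0], params[1:1 + p], params[-1]
    shape = 1.0 / np.exp(log_sigma)
    scale = np.exp(intercept + X @ beta)
    z = (time / scale) ** shape
    log_f = np.log(shape) - shape * np.log(scale) + (shape - 1) * np.log(time) - z
    log_S = -z
    return -np.sum(np.where(event == 1, log_f, log_S))

def fit_weibull_aft(X, time, event):
    start = np.zeros(X.shape[1] + 2)
    return minimize(weibull_aft_negloglik, start, args=(X, time, event), method="BFGS").x

# --- development series (simulated): fit the model and freeze the PI weights ---
n, p = 400, 3
X = rng.normal(size=(n, p))
true_beta = np.array([0.5, -0.3, 0.2])
T = np.exp(1.0 + X @ true_beta) * rng.weibull(1.5, size=n)    # event times
C = rng.exponential(scale=2 * np.median(T), size=n)           # censoring times
time, event = np.minimum(T, C), (T <= C).astype(int)
beta_hat = fit_weibull_aft(X, time, event)[1:1 + p]           # partial scores of the PI

# --- new case series: validation by calibration with the frozen PI as sole covariate ---
X_new = rng.normal(size=(n, p))
T_new = np.exp(1.0 + X_new @ true_beta) * rng.weibull(1.5, size=n)
C_new = rng.exponential(scale=2 * np.median(T_new), size=n)
time_new, event_new = np.minimum(T_new, C_new), (T_new <= C_new).astype(int)

PI_new = (X_new @ beta_hat).reshape(-1, 1)
calib = fit_weibull_aft(PI_new, time_new, event_new)          # intercept, slope, log-scale only
print("calibration slope on the new series (ideal value 1):", round(calib[1], 3))
```
Revision of the cut-off values used for prognostic grouping would then operate on the recalibrated PI scale, leaving the partial scores untouched.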

6.
7.
Few topics have stirred as much discussion and controversy as randomization. A reading of the literature suggests that clinical trialists generally feel randomization is necessary for valid inference, while biostatisticians using model-based inference often appear to prefer nearly optimal designs irrespective of any induced randomness. Dissection of the methods of treatment assignment shows that there are five basic approaches: pure randomizers, true randomizers, quasi-randomizers, permutation testers, and conventional modelers. Four of these have coherent design and analysis strategies, even though they are not mutually consistent, but the fifth and most prevalent approach (quasi-randomization) has little to recommend it. Design-adaptive allocation is defined and shown to provide valid inference, and a simulation indicates its efficiency advantage. In small studies, or in large studies with many important prognostic covariates or analytic subgroups, design-adaptive allocation is an extremely attractive method of treatment assignment.
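
To illustrate covariate-adaptive assignment in the general spirit of design-adaptive allocation (a hedged sketch, not the paper's exact criterion: what follows is a simple Pocock-Simon-style minimization with invented data), each incoming patient is assigned, with high probability, to whichever arm most reduces marginal covariate imbalance.
```python
import numpy as np

rng = np.random.default_rng(1)

def imbalance_if_assigned(counts, patient_levels, arm, arms):
    # Sum over covariates of the spread of within-level arm counts,
    # evaluated as if the current patient joined `arm`.
    total = 0
    for j, level in enumerate(patient_levels):
        hypothetical = [counts[a][j].get(level, 0) + (1 if a == arm else 0) for a in arms]
        total += max(hypothetical) - min(hypothetical)
    return total

def minimization_assign(covariates, arms=(0, 1), p_best=0.8):
    """Sequentially assign patients, preferring (with probability p_best) the arm
    that minimizes total marginal imbalance over the categorical covariates."""
    n, k = covariates.shape
    counts = {a: [dict() for _ in range(k)] for a in arms}
    assignment = np.empty(n, dtype=int)
    for i in range(n):
        scores = {a: imbalance_if_assigned(counts, covariates[i], a, arms) for a in arms}
        best = min(scores, key=scores.get)
        others = [a for a in arms if a != best]
        arm = best if rng.random() < p_best else rng.choice(others)
        assignment[i] = arm
        for j, level in enumerate(covariates[i]):
            counts[arm][j][level] = counts[arm][j].get(level, 0) + 1
    return assignment

# Two binary prognostic covariates for 60 patients arriving sequentially.
covs = rng.integers(0, 2, size=(60, 2))
arm = minimization_assign(covs)
for j in range(covs.shape[1]):
    print(f"covariate {j}, level 1: arm-0 vs arm-1 counts =",
          ((covs[:, j] == 1) & (arm == 0)).sum(), "vs", ((covs[:, j] == 1) & (arm == 1)).sum())
```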

8.
9.
10.
11.
This paper surveys the fundamental principles of subjective Bayesian inference in econometrics and the implementation of those principles using posterior simulation methods. The emphasis is on the combination of models and the development of predictive distributions. Moving beyond conditioning on a fixed number of completely specified models, the paper introduces subjective Bayesian tools for formal comparison of these models with as yet incompletely specified models. The paper then shows how posterior simulators can facilitate communication between investigators (for example, econometricians) on the one hand and remote clients (for example, decision makers) on the other, enabling clients to vary the prior distributions and functions of interest employed by investigators. A theme of the paper is the practicality of subjective Bayesian methods. To this end, the paper describes publicly available software for Bayesian inference, model development, and communication and provides illustrations using two simple econometric models.
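
A toy sketch of the core mechanics (our own illustration with invented models and priors, not the paper's software): posterior simulation for two simple Bayesian models of the same data, formal comparison via marginal likelihoods, and a model-averaged predictive distribution that a client could recompute under different priors.
```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(2)
y = rng.normal(loc=1.2, scale=1.0, size=40)               # observed series
n, sigma = y.size, 1.0                                     # known observation s.d.

def posterior_and_evidence(m0, s0):
    """Conjugate normal-mean model: posterior (mean, s.d.) for mu and log marginal likelihood."""
    post_var = 1.0 / (1.0 / s0**2 + n / sigma**2)
    post_mean = post_var * (m0 / s0**2 + y.sum() / sigma**2)
    cov = sigma**2 * np.eye(n) + s0**2 * np.ones((n, n))    # marginal covariance of y under the prior
    log_evidence = multivariate_normal.logpdf(y, mean=np.full(n, m0), cov=cov)
    return post_mean, np.sqrt(post_var), log_evidence

models = {"M1 (diffuse prior on mu)": (0.0, 10.0), "M2 (tight prior at 0)": (0.0, 0.25)}
fits = {name: posterior_and_evidence(*prior) for name, prior in models.items()}

# Posterior model probabilities under equal prior model probabilities.
log_ev = np.array([f[2] for f in fits.values()])
weights = np.exp(log_ev - log_ev.max())
weights /= weights.sum()

# Model-averaged predictive distribution by posterior simulation.
n_draws = 5000
which = rng.choice(len(weights), size=n_draws, p=weights)
params = list(fits.values())
pred = np.array([rng.normal(rng.normal(params[k][0], params[k][1]), sigma) for k in which])

for (name, _), w in zip(models.items(), weights):
    print(f"{name}: posterior probability {w:.3f}")
print("model-averaged 90% predictive interval:", np.round(np.quantile(pred, [0.05, 0.95]), 2))
```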

12.
A semicompeting risks problem involves two types of events: a nonterminal event and a terminal event (death). Typically, the nonterminal event is the focus of the study, but the terminal event can preclude the occurrence of the nonterminal event. Semicompeting risks are ubiquitous in studies of aging. Examples of semicompeting risk dyads include dementia and death, frailty syndrome and death, disability and death, and nursing home placement and death. Semicompeting risk models can be divided into two broad classes: models based only on observable quantities (class \(\mathcal{O}\)) and those based on potential (latent) failure times (class \(\mathcal{L}\)). The classical illness-death model belongs to class \(\mathcal{O}\). This model is a special case of multistate models, which have been an active area of methodology development. During the past decade and a half, there has also been a flurry of methodological activity on semicompeting risks based on latent failure times (\(\mathcal{L}\) models). These advances notwithstanding, semicompeting risks methodology has not penetrated biomedical research in general, and gerontological research in particular. Some possible reasons for this lack of uptake are that the methods are relatively new and sophisticated, that the conceptual problems associated with potential failure time models are difficult to overcome, that expository articles aimed at educating practitioners are scarce, and that readily usable software is not available. The main goals of this review article are: (i) to describe the major types of semicompeting risks problems arising in aging research, (ii) to provide a brief survey of semicompeting risks methods, (iii) to suggest appropriate methods for addressing the problems in aging research, (iv) to highlight areas where more work is needed, and (v) to suggest ways to facilitate the uptake of semicompeting risks methodology by the broader biomedical research community.
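
The asymmetry that defines semicompeting risks can be made concrete with a small simulation (the transition rates and censoring time below are our assumptions, not values from the review): data are generated from a simple illness-death structure in which death can preclude the nonterminal event (e.g. dementia) but not vice versa, and both events may be administratively censored.
```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Exponential transition rates for the illness-death model (illustrative values):
# healthy -> ill, healthy -> dead, ill -> dead.
rate_01, rate_02, rate_12 = 0.05, 0.03, 0.10
censor_time = 20.0                                    # administrative censoring

t_ill = rng.exponential(1 / rate_01, n)               # latent time to illness from state 0
t_dead0 = rng.exponential(1 / rate_02, n)             # latent time to death from state 0

# Nonterminal event is observed only if illness precedes both death and censoring.
ill_observed = (t_ill < t_dead0) & (t_ill < censor_time)
x1 = np.where(ill_observed, t_ill, np.minimum(t_dead0, censor_time))  # nonterminal follow-up time

# Death time: either directly from state 0, or after illness with rate_12.
t_dead = np.where(t_ill < t_dead0, t_ill + rng.exponential(1 / rate_12, n), t_dead0)
dead_observed = t_dead < censor_time
x2 = np.minimum(t_dead, censor_time)                  # terminal follow-up time

print("illness observed:              %.1f%%" % (100 * ill_observed.mean()))
print("death observed:                %.1f%%" % (100 * dead_observed.mean()))
print("death observed before illness: %.1f%%" % (100 * (dead_observed & ~ill_observed).mean()))
```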

13.
14.
15.
It is generally considered that analysis of variance by maximum likelihood or its variants is computationally impractical, despite existing techniques for reducing the computational effort per iteration and for reducing the number of iterations to convergence. This paper shows that a major reduction in the overall computational effort can be achieved through the use of sparse-matrix algorithms that take advantage of the factorial designs that characterize most applications of large analysis-of-variance problems. In this paper, an algebraic structure for factorial designs is developed. Through this structure, it is shown that the required computations can be arranged so that sparse-matrix methods result in greatly reduced storage and time requirements.
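
As a rough illustration of why sparsity pays off in factorial designs (our own construction with arbitrary factor sizes, not the paper's algorithm): the design matrix of a two-factor crossed layout has only a handful of nonzeros per row, so it can be stored and used in least-squares computations at a small fraction of the dense cost.
```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(4)
a_levels, b_levels, reps = 200, 150, 2
a = np.repeat(np.arange(a_levels), b_levels * reps)
b = np.tile(np.repeat(np.arange(b_levels), reps), a_levels)
n = a.size

# Dummy-coded design matrix [intercept | factor A | factor B]: 3 nonzeros per row.
rows = np.repeat(np.arange(n), 3)
cols = np.concatenate([np.zeros(n, dtype=int), 1 + a, 1 + a_levels + b]).reshape(3, n).T.ravel()
X = sparse.csr_matrix((np.ones(3 * n), (rows, cols)), shape=(n, 1 + a_levels + b_levels))

y = rng.normal(size=n) + 0.5 * (a % 3) - 0.3 * (b % 2)   # synthetic response
beta = lsqr(X, y)[0]     # sparse least squares (coding is over-parameterized, as in usual ANOVA dummy coding)

print("rows x cols:", X.shape, " nonzero fraction: %.4f" % (X.nnz / (X.shape[0] * X.shape[1])))
print("first few estimates:", beta[:4].round(3))
```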

16.
When data sets are multilevel (group nesting or repeated measures), different sources of variation must be identified. In the framework of unsupervised analyses, multilevel simultaneous component analysis (MSCA) has recently been proposed as the most satisfactory option for analyzing multilevel data. MSCA estimates submodels for the different levels in the data and thereby separates the "within"-subject and "between"-subject variation in the variables. Following the principles of MSCA and the strategy of decomposing the available data matrix into orthogonal blocks, and taking into account the between- and within-data structures, we generalize, in a multilevel perspective, multivariate models in which a matrix of response variables can be used to guide the projections (formed by responses predicted by explanatory variables or by a limited number of their combinations/composites) toward meaningful directions. To this end, the current paper proposes the multilevel version of the multivariate regression model and of dimensionality-reduction methods (used to predict responses with fewer linear composites of explanatory variables). The principal finding of the study is that the minimization of the loss functions related to multivariate regression, principal-component regression, reduced-rank regression, and canonical-correlation regression is equivalent, under some constraints, to minimizing separately two loss functions corresponding to the between and within structures. The paper closes with a case study focusing on the relationships between mental health severity and the intensity of care in the Lombardy region mental health system.
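
The orthogonal-block decomposition that this kind of multilevel analysis relies on can be shown numerically (a minimal sketch with simulated data, not the paper's estimation code): a stacked data matrix X is split into a "between" part (group means) and a "within" part (deviations from group means), and the total sum of squares separates additively into the two parts.
```python
import numpy as np

rng = np.random.default_rng(5)
n_groups, per_group, n_vars = 30, 15, 6
groups = np.repeat(np.arange(n_groups), per_group)
X = rng.normal(size=(n_groups * per_group, n_vars))
X = X - X.mean(axis=0)                                     # center over all observations

# Between part: each row replaced by its group mean; within part: deviation from it.
group_means = np.vstack([X[groups == g].mean(axis=0) for g in range(n_groups)])
X_between = group_means[groups]
X_within = X - X_between

# The two blocks are orthogonal, so a quadratic loss decomposes into between + within terms.
print("largest cross term (should be ~0):", np.abs(X_between.T @ X_within).max().round(10))
print("||X||^2 =", round((X**2).sum(), 3),
      " ||X_b||^2 + ||X_w||^2 =", round((X_between**2).sum() + (X_within**2).sum(), 3))
```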

17.
18.
19.
20.
A class of weighted bootstrap techniques, called biased bootstrap or b-bootstrap methods, is introduced. It is motivated by the need to adjust empirical methods, such as the 'uniform' bootstrap, in a surgical way to alter some of their features while leaving others unchanged. Depending on the nature of the adjustment, the b-bootstrap can be used to reduce bias, to reduce variance, or to render some characteristic equal to a predetermined quantity. Examples of the last application include a b-bootstrap approach to hypothesis testing in nonparametric contexts, where the b-bootstrap enables simulation 'under the null hypothesis' even when the hypothesis is false, and a b-bootstrap competitor to Tibshirani's variance stabilization method. An example of the bias reduction application is adjustment of Nadaraya–Watson kernel estimators to make them competitive with local linear smoothing. Other applications include density estimation under constraints, outlier trimming, sensitivity analysis, skewness or kurtosis reduction, and shrinkage.
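
The hypothesis-testing use can be sketched with a small weighted-bootstrap example (our assumption-laden illustration, not the authors' exact recipe): the uniform weights 1/n are exponentially tilted so that the weighted mean equals the null value, resampling with those weights then simulates 'under the null hypothesis' even though the data were generated with a different mean, and the observed mean is referred to that null distribution.
```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
x = rng.normal(loc=0.3, scale=1.0, size=80)           # data whose true mean is 0.3
mu0 = 0.0                                             # null value for the mean

def tilted_weights(x, mu0):
    """Exponential-tilt weights p_i proportional to exp(t * x_i), with t chosen so sum p_i x_i = mu0."""
    def weighted_mean_gap(t):
        w = np.exp(t * (x - x.mean()))                # centering the exponent only rescales the weights
        w /= w.sum()
        return w @ x - mu0
    t = brentq(weighted_mean_gap, -20, 20)
    w = np.exp(t * (x - x.mean()))
    return w / w.sum()

w = tilted_weights(x, mu0)
print("tilted weighted mean (should equal mu0):", round(w @ x, 6))

# Null distribution of the sample mean via weighted (biased) bootstrap resampling.
B = 5000
null_means = np.array([rng.choice(x, size=x.size, replace=True, p=w).mean() for _ in range(B)])
p_value = np.mean(np.abs(null_means - mu0) >= abs(x.mean() - mu0))
print("biased-bootstrap two-sided p-value:", round(p_value, 3))
```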
