Similar Articles
20 similar articles found (search time: 31 ms)
1.
We propose a method for assessing an individual patient's risk of a future clinical event using clinical trial or cohort data and Cox proportional hazards regression, combining the information from several studies using meta-analysis techniques. The method combines patient-specific estimates of the log cumulative hazard across studies, weighting by the relative precision of the estimates, using either fixed- or random-effects meta-analysis calculations. Risk assessment can be done for any future patient using a few key summary statistics determined once and for all from each study. Generalizations of the method to logistic regression and linear models are immediate. We evaluate the methods using simulation studies and illustrate their application using real data.
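The precision-weighted pooling this abstract refers to is the standard inverse-variance meta-analysis calculation. A minimal sketch follows; the per-study log cumulative hazard estimates and standard errors are invented for illustration, and this is the generic fixed-/random-effects (DerSimonian–Laird) computation, not the authors' implementation:

```python
import numpy as np

def fixed_effect_pool(estimates, ses):
    """Inverse-variance (fixed-effect) pooling: weight each study's
    estimate by the reciprocal of its squared standard error."""
    est = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    pooled = np.sum(w * est) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    return pooled, pooled_se

def random_effects_pool(estimates, ses):
    """DerSimonian-Laird random-effects pooling: add a moment estimate
    of the between-study variance tau^2 to each within-study variance."""
    est = np.asarray(estimates, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    w = 1.0 / v
    pooled_fe = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - pooled_fe) ** 2)   # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(est) - 1)) / c)  # truncated at zero
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * est) / np.sum(w_re), np.sqrt(1.0 / np.sum(w_re))

# Hypothetical patient-specific log cumulative hazard estimates from
# three studies, with their standard errors (all numbers invented)
log_H = [-1.2, -0.9, -1.0]
se = [0.30, 0.25, 0.40]
pooled, pooled_se = fixed_effect_pool(log_H, se)
```

The pooled estimate lies between the study estimates and its standard error is smaller than any single study's, which is the "borrowing of precision" the method exploits.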

2.
Summary.  Multivariate meta-analysis allows the joint synthesis of summary estimates from multiple end points and accounts for their within-study and between-study correlation. Yet practitioners usually meta-analyse each end point independently. I examine the role of within-study correlation in multivariate meta-analysis, to elicit the consequences of ignoring it. Using analytic reasoning and a simulation study, the within-study correlation is shown to influence the 'borrowing of strength' across end points, and wrongly ignoring it gives meta-analysis results with generally inferior statistical properties; for example, on average it increases the mean-square error and standard error of pooled estimates, and for non-ignorable missing data it increases their bias. The influence of within-study correlation is only negligible when the within-study variation is small relative to the between-study variation, or when very small differences exist across studies in the within-study covariance matrices. The findings are demonstrated by applied examples within medicine, dentistry and education. Meta-analysts are thus encouraged to account for the correlation between end points. To facilitate this, I conclude by reviewing options for multivariate meta-analysis when within-study correlations are unknown; these include obtaining individual patient data, using external information, performing sensitivity analyses and using alternatively parameterized models.
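The joint synthesis described above reduces, in the fixed-effect case, to generalised least squares pooling with the within-study covariance matrices as weights. A sketch for two end points follows; the estimates, standard errors and within-study correlation are invented, and this shows only the generic GLS step, not the full random-effects model the abstract discusses:

```python
import numpy as np

def mv_fixed_effect(estimates, covariances):
    """GLS pooling of bivariate study estimates y_i with within-study
    covariance matrices S_i: beta = (sum S_i^-1)^-1 * sum S_i^-1 y_i."""
    winv_sum = np.zeros((2, 2))
    wy_sum = np.zeros(2)
    for y, S in zip(estimates, covariances):
        Sinv = np.linalg.inv(S)
        winv_sum += Sinv
        wy_sum += Sinv @ np.asarray(y, dtype=float)
    pooled_cov = np.linalg.inv(winv_sum)
    return pooled_cov @ wy_sum, pooled_cov

def cov(se1, se2, rho):
    """Within-study covariance matrix from two SEs and a correlation."""
    return np.array([[se1**2, rho*se1*se2], [rho*se1*se2, se2**2]])

# Invented two-end-point estimates from three studies, with a
# within-study correlation of 0.6 between the end points
ys = [(0.5, 0.3), (0.7, 0.4), (0.4, 0.2)]
Ss = [cov(0.2, 0.15, 0.6), cov(0.25, 0.2, 0.6), cov(0.3, 0.25, 0.6)]
pooled, pooled_cov = mv_fixed_effect(ys, Ss)
```

Setting rho to 0 in the `Ss` above reproduces the independent per-end-point analysis the abstract warns against; the difference between the two results is the "borrowing of strength" at work.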

3.
Efficient inference for regression models requires that heteroscedasticity be taken into account. We consider statistical inference under heteroscedasticity in a semiparametric measurement error regression model, in which some covariates are measured with errors. This paper has multiple components. First, we propose a new method for testing for heteroscedasticity. The advantages of the proposed method over existing ones are that it does not need any nonparametric estimation and does not involve any mismeasured variables. Second, we propose a new two-step estimator for the error variances when there is heteroscedasticity. Finally, we propose a weighted estimating equation-based estimator (WEEBE) for the regression coefficients and establish its asymptotic properties. Compared with existing estimators, the proposed WEEBE is asymptotically more efficient, avoids undersmoothing the regressor functions and imposes fewer restrictions on the observed regressors. Simulation studies show that the proposed test procedure and estimators perform well in finite samples. A real data set is used to illustrate the utility of the proposed methods.

4.
In this paper we consider the statistical analysis of multivariate multiple nonlinear regression models with correlated errors, using Finite Fourier Transforms. Consistency and asymptotic normality of the weighted least squares estimates are established under various conditions on the regressor variables. These conditions involve different types of scalings, and the scaling factors are obtained explicitly for various types of nonlinear regression models, including an interesting model which requires the estimation of unknown frequencies. The estimation of frequencies is a classical problem occurring in many areas such as signal processing, environmental time series, astronomy and other areas of the physical sciences. We illustrate our methodology using two real data sets taken from geophysics and environmental sciences. The data we consider from geophysics are polar motion (now widely known as the “Chandler wobble”), where one has to estimate the drift parameters, the offset parameters and the two periodicities associated with elliptical motion. The data were first analyzed by Arato, Kolmogorov and Sinai, who treated them as a bivariate time series satisfying a finite-order time series model and estimated the periodicities using the coefficients of the fitted models. Our analysis shows that the two dominant periodicities are 12 months and 410 days. The second example we consider is the minimum/maximum monthly temperatures observed at the Antarctic Peninsula (Faraday/Vernadsky station). It is now widely believed that there has been a steady warming in this region over the past 50 years; if true, the warming has serious consequences for ecology, marine life, etc., as it can result in the melting of ice shelves and glaciers. Our objective here is to estimate any existing temperature trend in the data, and we use the nonlinear regression methodology developed here to achieve that goal.
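The frequency-estimation problem mentioned above is classically attacked by maximising the periodogram, which at a fine grid is equivalent to least-squares fitting of a single sinusoid. A self-contained sketch on simulated data (the series, frequency and noise level are all invented; this is the textbook periodogram approach, not the paper's weighted-least-squares method):

```python
import numpy as np

def estimate_frequency(y, t, freqs):
    """Grid search for the frequency maximizing the periodogram
    I(f) = |sum_j y_j exp(-2*pi*i*f*t_j)|^2; on a fine grid this is
    equivalent to least-squares fitting of a single sinusoid."""
    y = np.asarray(y, dtype=float) - np.mean(y)   # remove the offset
    return max(freqs,
               key=lambda f: np.abs(np.sum(y * np.exp(-2j*np.pi*f*t)))**2)

rng = np.random.default_rng(0)
t = np.arange(200)
true_f = 0.083        # cycles per observation (invented for the demo)
y = 2.0 * np.cos(2*np.pi*true_f*t + 0.4) + rng.normal(0, 0.5, t.size)

# Evaluate the periodogram on a fine grid below the Nyquist frequency
grid = np.linspace(0.01, 0.49, 2000)
f_hat = estimate_frequency(y, t, grid)
```

With 200 observations and this signal-to-noise ratio the periodogram peak recovers the true frequency to well within the Fourier resolution 1/n.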

5.
Summary.  We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large those studies are, and that little is gained by incorporating them unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions.

6.
In first-level analyses of functional magnetic resonance imaging data, adjustments for temporal correlation, such as a Satterthwaite approximation or a prewhitening method, are usually implemented in the univariate model to keep the nominal test level. In doing so, the temporal correlation structure of the data is estimated, assuming an autoregressive process of order one. We show that this is applicable in multivariate approaches too - more precisely, in the so-called stabilized multivariate test statistics. Furthermore, we propose a block-wise permutation method, including a random shift, that renders an approximation of the temporal correlation structure unnecessary and approximately keeps the nominal test level in spite of the dependence between sample elements. Although the intentions are different, a comparison of the multivariate methods with the multiple univariate ones shows that the global approach may achieve advantages if applied to suitable regions of interest. This is illustrated using an example from fMRI studies.

7.
In this article we study the problem of classification of three-level multivariate data, where multiple q-variate observations are measured at u sites and over p time points, under the assumption of multivariate normality. The new classification rules with certain structured and unstructured mean vectors and covariance structures are very efficient in small-sample scenarios, when the number of observations is not adequate to estimate the unknown variance–covariance matrix. These classification rules successfully model the correlation structure on successive repeated measurements over time. Computational algorithms for maximum likelihood estimates of the unknown population parameters are presented. Simulation results show that the introduction of sites in the classification rules improves their performance over the existing classification rules without sites.

8.
Scientific experiments commonly result in clustered discrete and continuous data. Existing methods for analyzing such data include the use of quasi-likelihood procedures and generalized estimating equations to estimate marginal mean response parameters. In applications to areas such as developmental toxicity studies, where discrete and continuous measurements are recorded on each fetus, or clinical ophthalmologic trials, where different types of observations are made on each eye, the assumption that data within a cluster are exchangeable is often very reasonable. We use this assumption to formulate fully parametric regression models for clusters of bivariate data with binary and continuous components. The regression models proposed have marginal interpretations and reproducible model structures. Tractable expressions for likelihood equations are derived and iterative schemes are given for computing efficient estimates (MLEs) of the marginal mean, correlations, variances and higher moments. We demonstrate the use of the ‘exchangeable’ procedure with an application to a developmental toxicity study involving fetal weight and malformation data.

9.
The non-central chi-squared distribution plays a vital role in statistical testing procedures. Estimation of the non-centrality parameter provides valuable information for the power calculation of the associated test. We are interested in the statistical inference properties of the non-centrality parameter estimate based on one observation (usually a summary statistic) from a truncated chi-squared distribution. This work is motivated by the application of the flexible two-stage design in case–control studies, where the sample size needed for the second stage of a two-stage study can be determined adaptively from the results of the first stage. We first study the moment estimate for the truncated distribution and prove its existence, uniqueness, inadmissibility and convergence properties. We then define a new class of estimates that includes the moment estimate as a special case. Among this class of estimates, we recommend one member that outperforms the moment estimate in a wide range of scenarios. We also present two methods for constructing confidence intervals. Simulation studies are conducted to evaluate the performance of the proposed point and interval estimates.
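The moment estimate the abstract builds on comes from the identity E[X] = k + λ for X ~ χ²_k(λ). A sketch of the untruncated version follows (the paper's estimator solves the analogous moment equation under the *truncated* distribution; the simulation below only checks the identity, with invented k and λ):

```python
import numpy as np

def moment_ncp(x, k):
    """Moment estimate of the non-centrality parameter lambda from a
    chi-squared observation x with k degrees of freedom: since
    E[X] = k + lambda, take lambda_hat = x - k, truncated at zero."""
    return max(float(x) - k, 0.0)

# Verify E[X] = k + lambda by simulation: a non-central chi-squared
# variable is a sum of k squared N(mu_i, 1) draws with sum(mu_i^2) = lambda
rng = np.random.default_rng(1)
k, lam = 4, 6.0
draws = np.sum((rng.normal(0.0, 1.0, (100000, k))
                + np.sqrt(lam / k)) ** 2, axis=1)
lam_hat = moment_ncp(draws.mean(), k)
```

The truncation at zero is what creates the boundary behaviour (and the inadmissibility issues) that motivate the improved class of estimates in the paper.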

10.
Multivariate associated kernel estimators, which depend on both the target point and a bandwidth matrix, are appropriate for distributions with partially or totally bounded supports and generalize classical ones such as the Gaussian. Previous studies on multivariate associated kernels have been restricted to products of univariate associated kernels, which correspond to diagonal bandwidth matrices. However, it has been shown in classical cases that, for certain forms of target density such as multimodal ones, the use of full bandwidth matrices offers the potential for significantly improved density estimation. In this paper, general associated kernel estimators with correlation structure are introduced. Asymptotic properties of these estimators are presented; in particular, the boundary bias is investigated. Generalized bivariate beta kernels are handled in more detail. The associated kernel with a correlation structure is built with a variant of the mode-dispersion method, and two families of bandwidth matrices are discussed using the least-squares cross-validation method. Simulation studies are carried out. In the particular situation of bivariate beta kernels, a very good performance of associated kernel estimators with correlation structure is observed compared with the diagonal case. Finally, an illustration on a real data set of paired rates in a framework of political elections is presented.
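The classical baseline the abstract contrasts with, a bivariate kernel density estimate with a *full* (non-diagonal) Gaussian bandwidth matrix, can be sketched as follows; the data, bandwidth matrix and evaluation points are all invented, and this is the classical Gaussian case rather than the beta-type associated kernels of the paper:

```python
import numpy as np

def kde_full_bandwidth(data, H, grid_points):
    """Bivariate kernel density estimate with a full bandwidth matrix H:
    f_hat(x) = (1/n) * sum_i K_H(x - X_i), where K_H is the N(0, H)
    density. Off-diagonal entries of H orient the kernel."""
    data = np.asarray(data, dtype=float)
    Hinv = np.linalg.inv(H)
    norm = 1.0 / (2.0 * np.pi * np.sqrt(np.linalg.det(H)))
    out = []
    for x in grid_points:
        d = x - data                              # (n, 2) differences
        q = np.einsum('ij,jk,ik->i', d, Hinv, d)  # Mahalanobis forms
        out.append(norm * np.mean(np.exp(-0.5 * q)))
    return np.array(out)

rng = np.random.default_rng(2)
sample = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
H = np.array([[0.2, 0.1], [0.1, 0.2]])   # full bandwidth matrix (invented)
dens = kde_full_bandwidth(sample, H,
                          [np.array([0.0, 0.0]), np.array([3.0, -3.0])])
```

Because the off-diagonal bandwidth entry aligns the kernel with the data's correlation, the estimate is high near the centre of the positively correlated cloud and essentially zero at a point far along the anti-diagonal.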

11.
Multivariate data arise frequently in biomedical and health studies where multiple response variables are collected across subjects. Unlike a univariate procedure fitting each response separately, a multivariate regression model provides a unique opportunity in studying the joint evolution of various response variables. In this paper, we propose two estimation procedures that improve estimation efficiency for the regression parameter by accommodating correlations among the response variables. The proposed procedures do not require knowledge of the true correlation structure, nor do they estimate the parameters associated with the correlation. Theoretical and simulation results confirm that the proposed estimators are more efficient than the one obtained from the univariate approach. We further propose simple and powerful inference procedures for a goodness-of-fit test that possess the chi-squared asymptotic properties. Extensive simulation studies suggest that the proposed tests are more powerful than the Wald test based on the univariate procedure. The proposed methods are also illustrated through the mother’s stress and children’s morbidity study.

12.
We propose a new type of multivariate statistical model that permits non‐Gaussian distributions as well as the inclusion of conditional independence assumptions specified by a directed acyclic graph. These models feature a specific factorisation of the likelihood that is based on pair‐copula constructions and hence involves only univariate distributions and bivariate copulas, of which some may be conditional. We demonstrate maximum‐likelihood estimation of the parameters of such models and compare them to various competing models from the literature. A simulation study investigates the effects of model misspecification and highlights the need for non‐Gaussian conditional independence models. The proposed methods are finally applied to modeling financial return data. The Canadian Journal of Statistics 40: 86–109; 2012 © 2012 Statistical Society of Canada

13.
This paper investigates statistical issues that arise in interlaboratory studies known as Key Comparisons when one has to link several comparisons to or through existing studies. An approach to the analysis of such data is proposed using Gaussian distributions with heterogeneous variances. We develop conditions for the set of sufficient statistics to be complete and for the uniqueness of uniformly minimum variance unbiased estimators (UMVUE) of the contrast parametric functions. New procedures are derived for estimating these functions with estimates of their uncertainty. These estimates lead to associated confidence intervals for the laboratory (or study) contrasts. Several examples demonstrate statistical inference for contrasts based on linkage through the pilot laboratories. Monte Carlo simulation results on the performance of approximate confidence intervals are also reported.

14.
Collapsibility with respect to a measure of association implies that the measure of association can be obtained from the marginal model. We first discuss model collapsibility and collapsibility with respect to regression coefficients for linear regression models. For parallel regression models, we give simple and different proofs of some of the known results and also obtain certain new results. For random coefficient regression models, we define (average) A-collapsibility and obtain conditions under which it holds. We also consider Poisson regression and logistic regression models, and derive conditions for collapsibility and A-collapsibility, respectively. These results generalize some of the results available in the literature. Some suitable examples are also discussed.
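A simple numerical illustration of collapsibility with respect to a regression coefficient: in a linear model, when an omitted covariate is independent of the regressor of interest, the marginal slope coincides with the conditional one. The model and all numbers below are invented for the demonstration and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
x = rng.normal(size=n)
z = rng.normal(size=n)              # independent of x by construction
y = 1.5 * x + 2.0 * z + rng.normal(size=n)

# Conditional (full) model: regress y on both x and z
X_full = np.column_stack([np.ones(n), x, z])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

# Marginal model: omit z entirely
X_marg = np.column_stack([np.ones(n), x])
beta_marg = np.linalg.lstsq(X_marg, y, rcond=None)[0]
```

Both fits recover a slope of about 1.5 for x, so the x-coefficient is collapsible over z here; for a logistic model the marginal coefficient would typically be attenuated even under the same independence, which is why separate conditions are needed in the non-linear cases the paper treats.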

15.
Density estimates that are expressible as the product of a base density function and a linear combination of orthogonal polynomials are considered in this paper. More specifically, two criteria are proposed for determining the number of terms to be included in the polynomial adjustment component and guidelines are suggested for the selection of a suitable base density function. A simulation study reveals that these stopping rules produce density estimates that are generally more accurate than kernel density estimates or those resulting from the application of the Kronmal–Tarter criterion. Additionally, it is explained that the same approach can be utilized to obtain multivariate density estimates. The proposed orthogonal polynomial density estimation methodology is applied to several univariate and bivariate data sets, some of which have served as benchmarks in the statistical literature on density estimation.
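With a standard normal base density, the product form described above uses the probabilists' Hermite polynomials as the orthogonal system. A minimal sketch follows (a fixed number of terms and simulated standard normal data, both invented for the demo; the paper's contribution is the data-driven choice of the number of terms and of the base, which is not implemented here):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

def hermite_density_estimate(data, x, m=4):
    """Orthogonal-polynomial density estimate with a standard normal
    base: f_hat(x) = phi(x) * sum_{j<=m} c_j He_j(x), where He_j are
    probabilists' Hermite polynomials (orthogonal w.r.t. phi, with
    squared norm j!) and c_j = mean(He_j(X_i)) / j!."""
    data = np.asarray(data, dtype=float)
    coeffs = []
    for j in range(m + 1):
        basis = np.zeros(j + 1)
        basis[j] = 1.0                       # selects He_j
        coeffs.append(hermeval(data, basis).mean() / factorial(j))
    phi = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)
    return phi * hermeval(x, np.array(coeffs))

rng = np.random.default_rng(4)
sample = rng.normal(0.0, 1.0, 5000)          # base density is exact here
x = np.array([0.0, 1.0, 2.5])
f_hat = hermite_density_estimate(sample, x, m=4)
```

Since the data are standard normal, c_0 = 1 and the higher coefficients are near zero, so the estimate essentially reproduces the base density phi; with skewed or multimodal data the adjustment terms would carry the correction.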

16.
Estimating parameters in a stochastic volatility (SV) model is a challenging task. Among other estimation methods and approaches, efficient simulation methods based on importance sampling have been developed for the Monte Carlo maximum likelihood estimation of univariate SV models. This paper shows that importance sampling methods can be used in a general multivariate SV setting. The sampling methods are computationally efficient. To illustrate the versatility of this approach, three different multivariate stochastic volatility models are estimated for a standard data set. The empirical results are compared to those from earlier studies in the literature. Monte Carlo simulation experiments, based on parameter estimates from the standard data set, are used to show the effectiveness of the importance sampling methods.  相似文献   
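The generic importance-sampling step behind such Monte Carlo likelihood estimates can be sketched on a one-period toy version of an SV likelihood, where the latent log-volatility is integrated out numerically. The model, the Gaussian proposal and all numbers are illustrative assumptions, not the paper's actual sampler:

```python
import numpy as np

def is_likelihood(y, mu_q, sd_q, n_draws, rng):
    """Importance-sampling estimate of the intractable likelihood
    L(y) = integral of N(y | 0, exp(h)) * N(h | 0, 1) dh, where h is
    the latent log-volatility, using a N(mu_q, sd_q^2) proposal:
    L_hat = mean over draws of p(y|h) p(h) / q(h)."""
    h = rng.normal(mu_q, sd_q, n_draws)
    log_p_y = -0.5 * (np.log(2*np.pi) + h + y**2 * np.exp(-h))
    log_p_h = -0.5 * (np.log(2*np.pi) + h**2)
    log_q_h = -0.5 * (np.log(2*np.pi) + np.log(sd_q**2)
                      + (h - mu_q)**2 / sd_q**2)
    w = np.exp(log_p_y + log_p_h - log_q_h)   # importance weights
    return w.mean()

rng = np.random.default_rng(5)
L_hat = is_likelihood(y=0.5, mu_q=0.0, sd_q=1.2, n_draws=200_000, rng=rng)
```

In a full SV application the same ratio-of-densities calculation runs over an entire latent volatility path, and the efficient methods the abstract refers to choose the proposal q to track the integrand closely.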

18.
Conditional probability distributions have been commonly used in modeling Markov chains. In this paper we consider an alternative approach based on copulas to investigate Markov-type dependence structures. Based on the realization of a single Markov chain, we estimate the parameters using one- and two-stage estimation procedures. We derive asymptotic properties of the marginal and copula parameter estimators and compare the performance of the estimation procedures based on Monte Carlo simulations. At low and moderate levels of dependence, the two-stage estimation performs comparably to maximum likelihood estimation. In addition, we propose a parametric pseudo-likelihood ratio test for copula model selection under the two-stage procedure. We apply the proposed methods to an environmental data set.

19.
Forecasting with longitudinal data has rarely been studied. Most of the available studies are for continuous responses, and all of them are for univariate responses. In this study, we consider forecasting multivariate longitudinal binary data. Five different models, ranging from simple univariate and multivariate marginal models to more complex marginally specified models, are studied for forecasting such data. The models' forecasting abilities are illustrated via a real-life data set and a simulation study. The simulation study includes a model-independent data generation scheme to provide a fair environment for model comparison. Independent variables are forecast along with the dependent ones to mimic real-life settings. Several accuracy measures are considered to compare the models' forecasting abilities. Results show that the complex models yield better forecasts.

20.
The majority of the existing literature on model-based clustering deals with symmetric components. In some cases, especially when dealing with skewed subpopulations, the estimate of the number of groups can be misleading; if symmetric components are assumed, we need more than one component to describe an asymmetric group. Existing mixture models, based on multivariate normal distributions and multivariate t distributions, try to fit symmetric distributions, i.e. they fit symmetric clusters. In the present paper, we propose the use of finite mixtures of the normal inverse Gaussian distribution (and its multivariate extensions). Such finite mixture models start from a density that allows for skewness and fat tails, generalize the existing models, are tractable and have desirable properties. We examine both the univariate case, to gain insight, and the multivariate case, which is more useful in real applications. EM-type algorithms are described for fitting the models. Real data examples are used to demonstrate the potential of the new model in comparison with existing ones.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号