Similar articles
20 similar articles found (search time: 31 ms)
1.
We use the additive risk model of Aalen (Aalen, 1980) as a model for the rate of a counting process. Rather than specifying the intensity, that is, the instantaneous probability of an event conditional on the entire history of the relevant covariates and counting processes, we present a model for the rate function, i.e., the instantaneous probability of an event conditional on only a selected set of covariates. When the rate function for the counting process is of Aalen form, we show that the usual Aalen estimator can be used and gives approximately unbiased estimates. The usual martingale-based variance estimator is incorrect, however, and an alternative estimator should be used. We also consider the semi-parametric version of the Aalen model as a rate model (McKeague and Sasieni, 1994), show that the standard errors computed under an assumption of intensities are incorrect, and give a different estimator. Finally, we introduce and implement a test statistic for the hypothesis of a time-constant effect in both the non-parametric and semi-parametric models. A small simulation study was performed to evaluate the performance of the new estimator of the standard error.
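As a point of reference for the Aalen estimator discussed above: in the covariate-free special case it reduces to the Nelson–Aalen cumulative hazard estimator. A minimal sketch with hypothetical survival data (assuming distinct event times and no late entry):

```python
def nelson_aalen(times, events):
    """Nelson-Aalen cumulative hazard: sum of dN(t)/Y(t) over event times."""
    cumhaz, out = 0.0, []
    at_risk = len(times)
    for t, d in sorted(zip(times, events)):
        if d == 1:                   # event observed at time t
            cumhaz += 1.0 / at_risk  # increment by 1 / (number at risk)
            out.append((t, cumhaz))
        at_risk -= 1                 # subject leaves the risk set
    return out
```

With covariates, the Aalen estimator generalises this increment to a least-squares step at each event time.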

2.
Asymptotic theory for the Cox semi-Markov illness-death model
Irreversible illness-death models are used to model disease processes and, in cancer studies, disease recovery. In most applications, a Markov model is assumed for the multistate model. When there are covariates, a Cox (1972, J Roy Stat Soc Ser B 34:187–220) model is used to model the effect of covariates on each transition intensity. Andersen et al. (2000, Stat Med 19:587–599) proposed a Cox semi-Markov model for this problem. In this paper, we study the large-sample theory for that model and provide the asymptotic variances of various probabilities of interest. A Monte Carlo study is conducted to investigate the robustness and efficiency of the Markov and semi-Markov estimators. A real data example from the PROVA (1991, Hepatology 14:1016–1024) trial is used to illustrate the theory.

3.
We consider fitting Emax models to the primary endpoint for a parallel group dose–response clinical trial. Such models can be difficult to fit by maximum likelihood if the data give little information about the maximum possible response. Consequently, we consider alternative models that can be derived as limiting cases, which can usually be fitted. Furthermore, we propose two model selection procedures for choosing between the different models. These model selection procedures are compared with two previously used model selection procedures. In a simulation study we find that the model selection procedure that performs best depends on the underlying true situation. One of the new model selection procedures gives what may be regarded as the most robust of the procedures.
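The Emax model referred to above is commonly written E(d) = E0 + Emax·d/(ED50 + d). A minimal fitting sketch (not the authors' procedure): for fixed ED50 the model is linear in (E0, Emax), so a crude grid search over ED50 with a closed-form least-squares step suffices for illustration. The doses and responses below are hypothetical:

```python
def emax(d, e0, emax_, ed50):
    # Emax dose-response curve: e0 + emax_ * d / (ed50 + d)
    return e0 + emax_ * d / (ed50 + d)

doses = [0, 10, 25, 50, 100, 150]          # hypothetical dose levels
resp  = [1.2, 3.9, 5.8, 7.4, 8.5, 8.9]     # hypothetical mean responses

def sse(e0, emax_, ed50):
    return sum((r - emax(d, e0, emax_, ed50)) ** 2 for d, r in zip(doses, resp))

best = None
for ed50 in [1, 2, 5, 10, 20, 30, 50, 80, 120, 200]:
    # for fixed ED50, r = e0 + emax * x with x = d/(ed50+d): ordinary least squares
    xs = [d / (ed50 + d) for d in doses]
    n = len(xs)
    sx, sy = sum(xs), sum(resp)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * r for x, r in zip(xs, resp))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    inter = (sy - slope * sx) / n
    err = sse(inter, slope, ed50)
    if best is None or err < best[0]:
        best = (err, inter, slope, ed50)   # (SSE, E0, Emax, ED50)
```

When ED50 drifts far beyond the dose range, the fitted curve flattens toward a linear limit in d, which is the kind of limiting case the abstract alludes to.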

4.
A stationary bilinear (SB) model can be used to describe processes with a time-varying degree of persistence that depends on past shocks. This study develops methods for Bayesian inference, model comparison, and forecasting in the SB model. Using monthly U.K. inflation data, we find that the SB model outperforms the random walk, first-order autoregressive AR(1), and autoregressive moving average ARMA(1,1) models in terms of root mean squared forecast errors. In addition, the SB model is superior to these three models in terms of predictive likelihood for the majority of forecast observations.
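A bilinear model of the simplest kind can be written y_t = (φ + b·ε_{t−1})·y_{t−1} + ε_t, so the persistence φ + b·ε_{t−1} varies with the last shock. A simulation sketch (parameter values are illustrative, not from the paper; stationarity here requires φ² + b²σ² < 1):

```python
import random

random.seed(0)
phi, b, sigma = 0.3, 0.4, 1.0   # illustrative parameters; 0.3**2 + 0.4**2 < 1
T = 5000
y, e_prev = 0.0, 0.0
series = []
for _ in range(T):
    e = random.gauss(0.0, sigma)
    # time-varying AR coefficient (phi + b*e_prev): persistence depends on the last shock
    y = (phi + b * e_prev) * y + e
    series.append(y)
    e_prev = e
```

Note the simulated path has a non-zero mean, b·σ²/(1 − φ), because the bilinear term correlates the shock with the lagged level.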

5.
In the analysis of censored survival data, the Cox proportional hazards model (1972) is extremely popular among practitioners. However, in many real-life situations the proportionality of the hazard ratios does not seem to be an appropriate assumption. To overcome this problem, we consider a class of nonproportional hazards models known as the generalized odds-rate class of regression models. The class is general enough to include several commonly used models, such as the proportional hazards model, the proportional odds model, and the accelerated lifetime model. The theoretical and computational properties of these models have been re-examined. The propriety of the posterior has been established under some mild conditions. A simulation study is conducted and a detailed analysis of the data from a prostate cancer study is presented to further illustrate the proposed methodology.
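The generalized odds-rate class mentioned above is often parameterised through the survival function S(t|x) = {1 + r·Λ0(t)·exp(x′β)}^{−1/r}, with r → 0 recovering proportional hazards and r = 1 proportional odds. A small numerical sketch of this nesting (function name is ours):

```python
import math

def surv_odds_rate(cum_haz, lp, r):
    # generalized odds-rate survival: S = (1 + r * Lambda0(t) * exp(lp)) ** (-1/r)
    # r -> 0 recovers proportional hazards: exp(-Lambda0(t) * exp(lp))
    u = cum_haz * math.exp(lp)
    if r == 0.0:
        return math.exp(-u)
    return (1.0 + r * u) ** (-1.0 / r)
```

At r = 1 this is exactly the proportional odds form S = 1/(1 + Λ0(t)·exp(x′β)).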

6.
M-estimation is a widely used technique for robust statistical inference. In this paper, we study model selection and model averaging for M-estimation, with the twin aims of improving the coverage probability of confidence intervals for the parameters of interest and reducing the impact of heavy-tailed errors or outliers in the response. Under general conditions, we develop robust versions of the focused information criterion and a frequentist model average estimator for M-estimation, and we examine their theoretical properties. In addition, we carry out extensive simulation studies and analyse two real examples to assess the performance of the new procedure, and find that the proposed method produces satisfactory results.

7.
Model-based phase I dose-finding designs rely on a single model throughout the study for estimating the maximum tolerated dose (MTD). One major concern is therefore the choice of the most suitable model, because the dose-allocation process and the MTD estimate depend on whether the model is reliable and fits the toxicity data well. The aim of our work was to remove the need for a model choice prior to trial onset and instead allow it sequentially at each patient's inclusion. In this paper, we describe a model-checking approach based on the posterior predictive check and a model-comparison approach based on the deviance information criterion, in order to identify a more reliable or better-fitting model during the course of a trial and to support clinical decision making. We then present two model-switching designs for a phase I cancer trial based on these approaches, and compare designs with and without model switching in a simulation study. The results show that the proposed designs decrease certain risks, such as poor dose allocation and failure to find the MTD, which can occur if the model is misspecified. Copyright © 2013 John Wiley & Sons, Ltd.

8.
We propose a Bayesian hierarchical model for multiple comparisons in mixed models, where the repeated measures on subjects are described by subject random effects. The model facilitates inference by parameterizing the successive differences of the population means, for which we choose independent prior distributions that are mixtures of a normal distribution and a discrete distribution with its entire mass at zero. For the other parameters, we choose conjugate or vague priors. The performance of the proposed hierarchical model is investigated in simulated data and two real data sets, and the results illustrate that it can effectively conduct a global test and pairwise comparisons using the posterior probability that any two means are equal. A simulation study is performed to analyze the type I error rate, the familywise error rate, and the test power. A Gibbs sampler is used to estimate the parameters and to calculate the posterior probabilities.

9.
In this paper, we discuss inference for the Box-Cox transformation model with left-truncated and right-censored data, which often arise, for example, in studies involving cross-sectional sampling. It is well known that the Box-Cox transformation model includes many commonly used models as special cases, such as the proportional hazards model and the additive hazards model. For inference, a Bayesian estimation approach is proposed in which a piecewise function is used to approximate the baseline hazard function. A conditional marginal prior, whose marginal part is free of any constraints, is employed to deal with the computational challenges caused by the constraints on the parameters, and an MCMC sampling procedure is developed. A simulation study is conducted to assess the finite-sample performance of the proposed method and indicates that it works well in practical situations. We apply the approach to a set of data arising from a retirement center.

10.
This paper analyses the differences between linear regression models as viewed in econometrics and in statistics, in terms of their properties, construction methods, and interpretation, and on the basis of these differences proposes a basic strategy for model specification. The study shows that recognising these differences is a fundamental and necessary step in model specification and helps ensure that linear regression models are correctly specified. Classic examples are used to further illustrate and analyse the differences between econometric and statistical regression models in applications, as well as the model specification problem.

11.
The construction of a joint model for mixed discrete and continuous random variables that accounts for their associations is an important statistical problem in many practical applications. In this paper, we use copulas to construct a class of joint distributions of mixed discrete and continuous random variables. In particular, we employ the Gaussian copula to generate joint distributions for mixed variables. Examples include the robit-normal and probit-normal-exponential distributions, the first for modelling the distribution of mixed binary-continuous data and the second for a mixture of continuous, binary and trichotomous variables. The new class of joint distributions is general enough to include many mixed-data models currently available. We study properties of the distributions and outline likelihood estimation; a small simulation study is used to investigate the finite-sample properties of estimates obtained by full and pairwise likelihood methods. Finally, we present an application to discriminant analysis of multiple correlated binary and continuous data from a study involving advanced breast cancer patients.
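A Gaussian-copula construction for mixed binary-continuous data can be sketched by drawing a correlated bivariate normal pair, keeping one coordinate as the continuous margin and thresholding the other into a binary margin. This is a simplified illustration, not the robit-normal model itself; parameter names are ours:

```python
import math
import random

def phi_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def sample_mixed(n, rho, p_event, seed=1):
    # draw (z1, z2) bivariate normal with correlation rho;
    # z1 is the continuous margin, z2 is thresholded into a binary margin
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * rng.gauss(0.0, 1.0)
        y = 1 if phi_cdf(z2) > 1.0 - p_event else 0   # event probability p_event
        out.append((z1, y))
    return out
```

With positive rho, the continuous variable is stochastically larger when the binary outcome is 1, which is the association the copula encodes.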

12.
Non-compliance with the specified dosing strategy of assigned treatments is a common problem in randomized drug clinical trials. Recently, there has been much interest in methods for analysing treatment effects in randomized clinical trials subject to non-compliance. In this paper, we estimate and compare treatment effects based on the Grizzle model (GM), which assumes ignorable non-compliance, as the custom model, and the generalized Grizzle model (GGM), which allows non-ignorable non-compliance, as the new model. A real data set on the treatment of knee osteoarthritis is used to compare these models. The results based on likelihood ratio statistics and a simulation study show the advantage of the proposed model (GGM) over the custom model (GM).

13.
This paper considers model averaging for the ordered probit and nested logit models, which are widely used in empirical research. Within the frameworks of these models, we examine a range of model averaging methods, including the jackknife method, which we prove in this paper to have an optimal asymptotic property. We conduct a large-scale simulation study to examine the behaviour of these model averaging estimators in finite samples, and draw comparisons with model selection estimators. Our results show that while neither averaging nor selection is a consistently better strategy, model selection results in the poorest estimates far more frequently than averaging, and more often than not, averaging yields superior estimates. Among the averaging methods considered, the one based on a smoothed version of the Bayesian information criterion frequently produces the most accurate estimates. In three real data applications, we demonstrate the usefulness of model averaging in mitigating problems associated with the ‘replication crisis’ that commonly arises with model selection.
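The smoothed BIC averaging mentioned above weights model m in proportion to exp(−BIC_m/2). A minimal sketch (function names are ours; subtracting the minimum BIC before exponentiating avoids numerical underflow):

```python
import math

def smoothed_bic_weights(bics):
    # w_m proportional to exp(-(BIC_m - min BIC) / 2); shifting by the min is
    # numerically safe and leaves the normalised weights unchanged
    m = min(bics)
    w = [math.exp(-(b - m) / 2.0) for b in bics]
    s = sum(w)
    return [x / s for x in w]

def averaged_estimate(estimates, bics):
    # model-averaged estimate: weighted sum of the candidate models' estimates
    return sum(wi * ei for wi, ei in zip(smoothed_bic_weights(bics), estimates))
```

A BIC gap of 10 already pushes almost all weight onto the better model, so smoothed BIC behaves like selection when one model clearly dominates and like averaging otherwise.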

14.
When studying associations between a functional covariate and scalar response using a functional linear model (FLM), scientific knowledge may indicate possible monotonicity of the unknown parameter curve. In this context, we propose an F-type test of monotonicity, based on a full versus reduced nested model structure, where the reduced model with monotonically constrained parameter curve is nested within an unconstrained FLM. For estimation under the unconstrained FLM, we consider two approaches: penalised least-squares and linear mixed model effects estimation. We use a smooth then monotonise approach to estimate the reduced model, within the null space of monotone parameter curves. A bootstrap procedure is used to simulate the null distribution of the test statistic. We present a simulation study of the power of the proposed test, and illustrate the test using data from a head and neck cancer study.

15.
The main focus of our paper is to compare the performance of different model selection criteria for multivariate reduced rank time series. We consider one of the most commonly used reduced rank models, the reduced rank vector autoregression (RRVAR(p, r)) introduced by Velu et al. [Reduced rank models for multiple time series. Biometrika. 1986;73(1):105–118]. Our study includes the most popular model selection criteria, divided into two groups: simultaneous selection and two-step selection criteria. Methods from the former group select both the autoregressive order p and the rank r simultaneously, while two-step criteria first choose an optimal order p (using model selection criteria intended for the unrestricted VAR model) and then select an optimal rank r of the coefficient matrices (e.g. by sequential testing). The criteria considered include well-known information criteria (the Akaike information criterion, Schwarz criterion, Hannan–Quinn criterion, etc.) as well as widely used sequential tests (e.g. the Bartlett test) and the bootstrap method. An extensive simulation study is carried out to investigate the efficiency of all the criteria included in our study. The analysis covers 34 methods: 6 simultaneous methods and 28 two-step approaches. To analyse carefully how different factors affect the performance of the criteria, we consider over 150 simulation settings and investigate, in particular, the influence of the time series dimension, the covariance structure, the level of correlation among components, and the level of noise (variance). Moreover, we analyse the prediction accuracy of the RRVAR model and compare it with results obtained for the unrestricted vector autoregression.
We also present a real data application of model selection criteria for the RRVAR model using Polish macroeconomic time series data observed over the period 1997–2007.

16.
We propose to use calibrated imputation to compensate for missing values. This technique consists of finding final imputed values that are as close as possible to preliminary imputed values and are calibrated to satisfy constraints. Preliminary imputed values, potentially justified by an imputation model, are obtained through deterministic single imputation. Using appropriate constraints, the resulting imputed estimator is asymptotically unbiased for estimation of linear population parameters such as domain totals. A quasi-model-assisted approach is considered in the sense that inferences do not depend on the validity of an imputation model and are made with respect to the sampling design and a non-response model. An imputation model may still be used to generate imputed values and thus to improve the efficiency of the imputed estimator. This approach naturally handles the situation where more than one imputation method is used owing to missing values in the variables that are used to obtain imputed values. We use the Taylor linearization technique to obtain a variance estimator under a general non-response model. For the logistic non-response model, we show that ignoring the effect of estimating the non-response model parameters leads to overestimating the variance of the imputed estimator. In practice, the overestimation is expected to be moderate or even negligible, as shown in a simulation study.
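The core of calibrated imputation, finding final imputed values as close as possible to preliminary ones subject to calibration constraints, has a closed form in the simplest case of one linear constraint and a quadratic distance: the Lagrange solution shifts each preliminary value in proportion to its weight. A sketch (function name and data are ours):

```python
def calibrate_imputations(prelim, weights, target_total):
    # minimise sum_i (z_i - z0_i)^2 subject to sum_i w_i * z_i = target_total;
    # the Lagrange-multiplier solution is z_i = z0_i + lam * w_i
    cur = sum(w * z for w, z in zip(weights, prelim))
    lam = (target_total - cur) / sum(w * w for w in weights)
    return [z + lam * w for z, w in zip(prelim, weights)]
```

Real applications use several constraints at once (the multiplier becomes a vector), but the one-constraint case already shows why the final values stay close to the model-based preliminary ones.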

17.
In this paper, we develop a semiparametric regression model for longitudinal skewed data. In the new model, we allow both the transformation function and the baseline function to be unknown. The proposed model provides a much broader class of models than the existing additive and multiplicative models. Our estimators for the regression parameters, transformation function and baseline function are asymptotically normal. In particular, the estimator for the transformation function converges to its true value at the rate n^{-1/2}, the convergence rate one would expect for a parametric model. In simulation studies, we demonstrate that the proposed semiparametric method is robust with little loss of efficiency. Finally, we apply the new method to a study on longitudinal health care costs.

18.
Ordinary differential equations (ODEs) are normally used to model dynamic processes in applied sciences such as biology, engineering, physics, and many other areas. In these models, the parameters are usually unknown, and thus they are often specified artificially or empirically. Alternatively, a feasible method is to estimate the parameters based on observed data. In this study, we propose a Bayesian penalized B-spline approach to estimate the parameters and initial values for ODEs used in epidemiology. We evaluated the efficiency of the proposed method based on simulations using the Markov chain Monte Carlo algorithm for the Kermack–McKendrick model. The proposed approach is also illustrated based on a real application to the transmission dynamics of hepatitis C virus in mainland China.
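The Kermack–McKendrick model used in the simulations above is the SIR system dS/dt = −βSI, dI/dt = βSI − γI, dR/dt = γI. A forward-Euler sketch of the forward problem (parameter values are illustrative; the paper's Bayesian estimation step is not reproduced here):

```python
def sir(beta, gamma, s0, i0, r0, dt, steps):
    # Euler integration of the Kermack-McKendrick SIR equations:
    #   dS/dt = -beta*S*I,  dI/dt = beta*S*I - gamma*I,  dR/dt = gamma*I
    s, i, r = s0, i0, r0
    path = [(s, i, r)]
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        dr = gamma * i
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
        path.append((s, i, r))
    return path
```

Estimating (β, γ) from noisy observations of such a path is exactly the inverse problem the proposed penalized B-spline approach addresses.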

19.
The broken stick model is a model of the abundance of species in a habitat, and it has been widely extended. In this paper, we present results from an exploratory data analysis of this model. To obtain some of the statistics, we formulate the broken stick model as a probability distribution and provide an expression for its cumulative distribution function, which is needed for the exploratory analysis. The inequalities we present are useful in ecological studies that apply broken stick models. These results are also useful for testing the goodness of fit of the broken stick model, as an alternative to the chi-square test, which has often been the main test used; they may thus be applied in several alternative and complementary ways for testing goodness of fit.
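Under MacArthur's broken stick model, the expected relative abundance of the i-th most abundant of S species is p_i = (1/S)·Σ_{k=i}^{S} 1/k, which is the quantity a goodness-of-fit test compares with observed abundances. A minimal sketch:

```python
def broken_stick(S):
    # expected relative abundances under the broken stick model:
    # p_i = (1/S) * sum_{k=i}^{S} 1/k for the i-th ranked species
    return [sum(1.0 / k for k in range(i, S + 1)) / S for i in range(1, S + 1)]
```

The expected abundances are strictly decreasing in rank and sum to one, so they form a valid null distribution for fit testing.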

20.
For right-censored data, the accelerated failure time (AFT) model is an alternative to the commonly used proportional hazards regression model. It is a linear model for the (log-transformed) outcome of interest, and is particularly useful for censored outcomes that are not time-to-event, such as laboratory measurements. We provide a general and easily computable definition of the R2 measure of explained variation under the AFT model for right-censored data. We study its behavior under different censoring scenarios and under different error distributions; in particular, we also study its robustness when the parametric error distribution is misspecified. Based on Monte Carlo investigation results, we recommend the log-normal distribution as a robust error distribution to be used in practice for the parametric AFT model, when the R2 measure is of interest. We apply our methodology to an alcohol consumption during pregnancy data set from Ukraine.
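On the population scale, the log-linear AFT model log T = x′β + error suggests the familiar variance decomposition R2 = Var(x′β)/(Var(x′β) + Var(error)); the paper's measure additionally handles right censoring, which this sketch does not attempt (function name is ours):

```python
def aft_r2(var_linpred, var_error):
    # explained variation for log T = x'beta + error on the log-time scale:
    # R2 = Var(x'beta) / (Var(x'beta) + Var(error))
    return var_linpred / (var_linpred + var_error)
```

Under a log-normal error, Var(error) is simply the error variance of the fitted normal model on the log scale, which is one reason that choice is convenient in practice.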


Copyright©北京勤云科技发展有限公司  京ICP备09084417号