Similar Articles (20 results)
1.
Clinical studies aimed at identifying effective treatments to reduce the risk of disease or death often require long-term follow-up of participants in order to observe a sufficient number of events to precisely estimate the treatment effect. In such studies, observing the outcome of interest during follow-up may be difficult, and high rates of censoring may be observed, which often leads to reduced power when applying straightforward statistical methods developed for time-to-event data. Alternative methods have been proposed to take advantage of auxiliary information that may potentially improve efficiency when estimating marginal survival and improve power when testing for a treatment effect. Recently, Parast et al. (J Am Stat Assoc 109(505):384–394, 2014) proposed a landmark estimation procedure for the estimation of survival and treatment effects in a randomized clinical trial setting and demonstrated that significant gains in efficiency and power could be obtained by incorporating intermediate event information as well as baseline covariates. However, the procedure requires the assumption that the potential outcomes for each individual under treatment and control are independent of treatment group assignment, which is unlikely to hold in an observational study setting. In this paper, we develop the landmark estimation procedure for use in an observational setting. In particular, we incorporate inverse probability of treatment weights (IPTW) in the landmark estimation procedure to account for selection bias on observed baseline (pretreatment) covariates. We demonstrate that consistent estimates of survival and treatment effects can be obtained by using IPTW, and that efficiency is improved by using auxiliary intermediate-event and baseline information. We compare our proposed estimates to those obtained using the Kaplan–Meier estimator, the original landmark estimation procedure, and the IPTW Kaplan–Meier estimator.
We illustrate the resulting reduction in bias and gains in efficiency through a simulation study and apply our procedure to an AIDS dataset to examine the effect of previous antiretroviral therapy on survival.
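The IPTW-weighted survival estimation mentioned above can be illustrated with a minimal sketch (this is not the paper's full landmark procedure, and the data below are toy values): a Kaplan–Meier estimator in which each subject contributes its inverse-probability-of-treatment weight to the risk set and to the event count.

```python
import numpy as np

def weighted_kaplan_meier(time, event, weights):
    """Kaplan-Meier curve in which each subject contributes its weight
    (e.g. an IPTW weight) to the risk set and the event count."""
    order = np.argsort(time)
    time, event, weights = time[order], event[order], weights[order]
    surv, curve = 1.0, []
    for t in np.unique(time[event == 1]):
        at_risk = weights[time >= t].sum()                  # weighted risk set
        deaths = weights[(time == t) & (event == 1)].sum()  # weighted events
        surv *= 1.0 - deaths / at_risk
        curve.append((float(t), float(surv)))
    return curve

# toy data; with unit weights this reduces to the ordinary Kaplan-Meier estimator
t = np.array([1.0, 2.0, 3.0, 4.0])
e = np.array([1, 0, 1, 1])
curve = weighted_kaplan_meier(t, e, np.ones(4))
print(curve)  # [(1.0, 0.75), (3.0, 0.375), (4.0, 0.0)]
```

In practice, the weights would come from a fitted propensity-score model rather than being set to one.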

2.
Bartlett correction constitutes one of the attractive features of empirical likelihood because it enables the construction of confidence regions for parameters with improved coverage probabilities. We study the Bartlett correction of spatial frequency domain empirical likelihood (SFDEL) based on general spectral estimating functions for regularly spaced spatial data. This general formulation can be applied to testing and estimation problems in spatial analysis, for example testing covariance isotropy, testing covariance separability as well as estimating the parameters of spatial covariance models. We show that the SFDEL is Bartlett correctable. In particular, the improvement in coverage accuracies of the Bartlett‐corrected confidence regions depends on the underlying spatial structures. The Canadian Journal of Statistics 47: 455–472; 2019 © 2019 Statistical Society of Canada

3.
We address the issue of performing inference on the parameters that index the modified extended Weibull (MEW) distribution. We show that numerical maximization of the MEW log-likelihood function can be problematic: it is even possible to encounter maximum likelihood estimates that are not finite, that is, monotonic likelihood functions. We consider different penalization schemes to improve maximum likelihood point estimation; a penalization scheme based on the Jeffreys invariant prior is shown to be particularly useful. Simulation results on point estimation, interval estimation, and hypothesis testing are presented. Two empirical applications are presented and discussed.

4.
The stochastic volatility model has no closed-form likelihood, and hence the maximum likelihood estimation method is difficult to implement. However, the model has a known characteristic function, and as a consequence it is estimable via the empirical characteristic function. In this paper, the characteristic function of the model is derived and the estimation procedure is discussed. An application is considered for daily returns of the Australian/New Zealand dollar exchange rate. Model checking suggests that the stochastic volatility model, together with the empirical characteristic function estimates, fits the data well.
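The empirical characteristic function (ECF) approach can be sketched generically. The stochastic volatility model's characteristic function is model-specific, so the illustration below uses a simple normal model and an assumed frequency grid purely to show the ECF-matching idea: choose the parameters that make the model characteristic function closest to the ECF of the sample.

```python
import numpy as np
from scipy.optimize import minimize

def ecf(u, x):
    """Empirical characteristic function of the sample x at frequencies u."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def fit_normal_ecf(x):
    """Estimate (mu, sigma) of a normal model by matching its characteristic
    function exp(i*mu*u - sigma^2 * u^2 / 2) to the ECF over a frequency grid."""
    u = np.linspace(0.1, 2.0, 20)      # assumed frequency grid
    emp = ecf(u, x)
    def loss(theta):
        mu, log_sig = theta
        model = np.exp(1j * mu * u - 0.5 * np.exp(2.0 * log_sig) * u ** 2)
        return np.abs(model - emp).sum()
    res = minimize(loss, x0=[0.0, 0.0], method="Nelder-Mead")
    mu, log_sig = res.x
    return mu, np.exp(log_sig)

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, size=5000)
mu_hat, sig_hat = fit_normal_ecf(x)
print(round(mu_hat, 1), round(sig_hat, 1))  # close to the true values (1.0, 2.0)
```

The frequency grid and the L1 matching criterion are illustrative choices; the literature also uses weighted integrals of the squared CF distance.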

5.
This paper describes an application of small area estimation (SAE) techniques under area-level spatial random effect models when only area (or district or aggregated) level data are available. In particular, the SAE approach is applied to produce district-level model-based estimates of crop yield for paddy in the state of Uttar Pradesh in India, using the data on crop-cutting experiments supervised under the Improvement of Crop Statistics scheme and the secondary data from the Population Census. Diagnostic measures are illustrated to examine the model assumptions as well as the reliability and validity of the generated model-based small area estimates. The results show a considerable gain in precision in the model-based estimates produced by applying SAE. Furthermore, the model-based estimates obtained by exploiting spatial information are more efficient than those obtained by ignoring this information; both, however, are more efficient than the direct survey estimates. In many districts there are no survey data, so direct survey estimates cannot be produced; the model-based estimates generated using SAE remain reliable for such districts. These estimates will provide invaluable information to policy analysts and decision-makers.

6.
Inferential methods based on ranks offer robust and powerful alternatives for testing and estimation. This article pursues two objectives. First, we develop a general method for simultaneous confidence intervals based on rank estimates of the parameters of a general linear model and derive the asymptotic distribution of the pivotal quantity. Second, we extend the method to high-dimensional data, such as gene expression data, for which the usual large-sample approximation does not apply. It is common in practice to use the asymptotic distribution to make inferences from small samples; the empirical investigation in this article shows that, for methods based on rank estimates, this approach does not produce viable inference and should be avoided. A method based on the bootstrap is outlined and shown to provide a reliable and accurate way of constructing simultaneous confidence intervals based on rank estimates. In particular, it is shown that the commonly applied normal and t approximations are not satisfactory, particularly for large-scale inference. Rank-based methods are uniquely suitable for the analysis of microarray gene expression data, which often involve large-scale inference based on small samples that contain many outliers and violate the normality assumption. A real microarray dataset is analyzed using the rank-estimate simultaneous confidence intervals, and the viability of the proposed method is assessed through a Monte Carlo simulation study under varied assumptions.

7.
The data that are used in constructing empirical Bayes estimates can properly be regarded as arising in a two-stage sampling scheme. In this setting it is possible to modify the conventional parameter estimates so that a reduction in expected squared error is effected. In the empirical Bayes approach this is done through the use of Bayes's theorem. The alternative approach proposed in this paper specifies a class of modified estimates and then seeks to identify that member of the class which yields the minimum squared error. One advantage of this approach relative to the empirical Bayes approach is that certain problems involving multiple parameters are easily overcome. Further, it permits the use of relatively efficient methods of non-parametric estimation, such as those based on quantiles or ranks; this has not been achieved by empirical Bayes methods.

8.
Empirical Bayes (EB) estimates in general linear mixed models are useful for small area estimation in the sense of increasing the precision of estimation of small area means. However, one potential difficulty with EB is that the overall estimate for a larger geographical area, based on a (weighted) sum of EB estimates, is not necessarily identical to the corresponding direct estimate such as the overall sample mean. Another difficulty is that EB estimates over-shrink, resulting in a sampling variance smaller than the posterior variance. One way to fix these problems is the benchmarking approach based on constrained empirical Bayes (CEB) estimators, which satisfy the constraints that the aggregated mean and variance are identical to the requested values of mean and variance. In this paper, we treat general mixed models, derive asymptotic approximations of the mean squared error (MSE) of CEB, and provide second-order unbiased estimators of the MSE based on the parametric bootstrap method. These results are applied to natural exponential families with quadratic variance functions. As a specific example, the Poisson-gamma model is treated, and it is illustrated through real mortality data that the CEB estimates and their MSE estimates work well.

9.
In testing, item response theory models are widely used in order to estimate item parameters and individual abilities. However, even unidimensional models require a considerable sample size so that all parameters can be estimated precisely. The introduction of empirical prior information about candidates and items might reduce the number of candidates needed for parameter estimation. Using data for IQ measurement, this work shows how empirical information about items can be used effectively for item calibration and in adaptive testing. First, we propose multivariate regression trees to predict the item parameters based on a set of covariates related to the item-solving process. Afterwards, we compare the item parameter estimation when tree-fitted values are included in the estimation or when they are ignored. Model estimation is fully Bayesian, and is conducted via Markov chain Monte Carlo methods. The results are twofold: (a) in item calibration, it is shown that the introduction of prior information is effective with short test lengths and small sample sizes and (b) in adaptive testing, it is demonstrated that the use of the tree-fitted values instead of the estimated parameters leads to a moderate increase in the test length, but provides a considerable saving of resources.

10.
In this paper we discuss recursive (or online) estimation in (i) regression and (ii) autoregressive integrated moving average (ARIMA) time series models. The adopted approach uses Kalman filtering techniques to calculate estimates recursively. This approach is used for the estimation of constant as well as time-varying parameters. In the first section of the paper we consider the linear regression model. We discuss recursive estimation both for constant and time-varying parameters. For constant parameters, Kalman filtering specializes to recursive least squares. In general, we allow the parameters to vary according to an autoregressive integrated moving average process and update the parameter estimates recursively. Since the stochastic model for the parameter changes will rarely be known, simplifying assumptions have to be made. In particular we assume a random walk model for the time-varying parameters and show how to determine whether the parameters are changing over time. This is illustrated with an example.
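The recursive least squares special case of the Kalman filter mentioned above can be sketched as follows; the gain, error, and covariance updates are standard, and the data are toy values for a constant-parameter regression.

```python
import numpy as np

def rls_update(theta, P, x, y, forget=1.0):
    """One recursive least squares step (the Kalman filter specialisation
    for a constant-parameter linear regression)."""
    x = x.reshape(-1, 1)
    gain = P @ x / (forget + (x.T @ P @ x).item())  # Kalman gain
    err = y - (x.T @ theta).item()                  # one-step prediction error
    theta = theta + gain * err                      # updated estimate
    P = (P - gain @ x.T @ P) / forget               # updated covariance
    return theta, P

# toy example: recover y_t = 2 + 3 t from noiseless observations
theta = np.zeros((2, 1))
P = np.eye(2) * 1e6                                 # diffuse initial covariance
for t in range(200):
    theta, P = rls_update(theta, P, np.array([1.0, float(t)]), 2.0 + 3.0 * t)
print(theta.ravel())                                # close to [2, 3]
```

Setting the forgetting factor below one discounts old observations exponentially, a simple surrogate for the random-walk parameter model discussed in the abstract.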

11.
Statistical agencies make changes to the data collection methodology of their surveys to improve the quality of the data collected or to improve the efficiency with which they are collected. For reasons of cost it may not be possible to estimate the effect of such a change on survey estimates or response rates reliably without conducting an experiment embedded in the survey, in which some respondents are enumerated using the new method and others using the existing method. Embedded experiments are often designed for repeated and overlapping surveys; however, previous methods use sample data from only one occasion. The paper focuses on estimating the effect of a methodological change on estimates in the case of repeated surveys with overlapping samples from several occasions. Efficient design of an embedded experiment that covers more than one time point is also mentioned. All inference is unbiased over an assumed measurement model, the experimental design and the complex sample design. Other benefits of the approach proposed include the following: it exploits the correlation between the samples on each occasion to improve estimates of treatment effects; treatment effects are allowed to vary over time; it is robust against incorrectly rejecting the null hypothesis of no treatment effect; it allows a wide set of alternative experimental designs. This paper applies the methodology proposed to the Australian Labour Force Survey to measure the effect of replacing pen-and-paper interviewing with computer-assisted interviewing. This application considered alternative experimental designs in terms of their statistical efficiency and their risks to maintaining a consistent series. The approach proposed is significantly more efficient than using only 1 month of sample data in estimation.

12.
This paper discusses regression analysis of clustered current status data under semiparametric additive hazards models. In particular, we consider the situation when cluster sizes can be informative about correlated failure times from the same cluster. To address the problem, we present estimating equation-based estimation procedures and establish asymptotic properties of the resulting estimates. Finite sample performance of the proposed method is assessed through an extensive simulation study, which indicates the procedure works well. The method is applied to a motivating data set from a lung tumorigenicity study.

13.
贾婧 et al., 《统计研究》 (Statistical Research), 2018, 35(11): 116–128
A prerequisite for modeling time-varying higher-order moments of asset returns is that the skewness and kurtosis of returns are themselves time-varying, i.e., that returns exhibit hetero-skewness and hetero-kurtosis features analogous to heteroskedasticity. Existing tests in the literature for identifying time-varying skewness and kurtosis suffer from limited applicability and low power. This paper proposes regression-based tests to identify time variation in the skewness and kurtosis of asset returns. On the one hand, the tests use the probability integral transform to relax the Lagrange multiplier test's requirement that higher-order moments of the return series exist; on the other hand, they account for the effect of parameter-estimation uncertainty on the statistical properties of the test statistics. The tests therefore have good asymptotic properties and wider applicability. Monte Carlo simulations show that the tests have good finite-sample properties, with appropriate size and high power. Finally, the regression-based tests are applied to time-varying modeling of the returns of the Shanghai Composite Index and the Shenzhen Component Index.

14.
An extension of the generalized linear mixed model was constructed to simultaneously accommodate overdispersion and hierarchies present in longitudinal or clustered data. This so-called combined model includes conjugate random effects at the observation level to capture overdispersion and normal random effects at the subject level to handle correlation. A variety of data types can be handled in this way, using different members of the exponential family. Both maximum likelihood and Bayesian estimation for covariate effects and variance components were proposed. The focus of this paper is the development of an estimation procedure for the two sets of random effects. These are necessary when making predictions for future responses or their associated probabilities. Such (empirical) Bayes estimates will also be helpful in model diagnosis, both when checking the fit of the model and when investigating outlying observations. The proposed procedure is applied to three datasets of different outcome types. Copyright © 2014 John Wiley & Sons, Ltd.

15.
The coefficient of variation (CV) can be used as an index of the reliability of a measurement. The lognormal distribution has been applied to fit data in many fields. We developed approximate interval estimation for the ratio of two coefficients of variation (CVs) of lognormal distributions using the Wald-type, Fieller-type, and log methods, and the method of variance estimates recovery (MOVER). Simulation studies show that the empirical coverage rates of these methods are satisfactorily close to the nominal coverage rate for medium sample sizes.
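As an illustration only (a percentile bootstrap, not the paper's Wald-type, Fieller-type, log, or MOVER intervals), the ratio of two lognormal CVs can be interval-estimated as sketched below; the lognormal CV depends only on the log-scale variance, via sqrt(exp(s^2) - 1).

```python
import numpy as np

def lognormal_cv(x):
    """CV of a lognormal sample, sqrt(exp(s^2) - 1), with s^2 the
    sample variance of the log-scale data."""
    s2 = np.var(np.log(x), ddof=1)
    return np.sqrt(np.exp(s2) - 1.0)

def cv_ratio_ci(x, y, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for CV(x) / CV(y)."""
    rng = np.random.default_rng(seed)
    ratios = [
        lognormal_cv(rng.choice(x, len(x))) / lognormal_cv(rng.choice(y, len(y)))
        for _ in range(n_boot)
    ]
    lo, hi = np.quantile(ratios, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lognormal_cv(x) / lognormal_cv(y), (lo, hi)

rng = np.random.default_rng(1)
x = rng.lognormal(0.0, 0.5, 100)     # equal log-scale sigmas, so the
y = rng.lognormal(1.0, 0.5, 100)     # true CV ratio is 1
est, (lo, hi) = cv_ratio_ci(x, y)
print(round(est, 2), round(lo, 2), round(hi, 2))
```

The bootstrap is a generic baseline against which closed-form intervals such as MOVER are usually compared.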

16.
Binary data are often of interest in business surveys, particularly when the aim is to characterize grouping in the businesses making up the survey population. When small area estimates are required for such binary data, use of standard estimation methods based on linear mixed models (LMMs) becomes problematic. We explore two model-based techniques of small area estimation for small area proportions: the empirical best predictor (EBP) under a generalized linear mixed model and the model-based direct estimator (MBDE) under a population-level LMM. Our empirical results show that both the MBDE and the EBP perform well. The EBP is a computationally intensive method, whereas the MBDE is easy to implement. In the case of model misspecification, the MBDE also appears to be more robust. Mean-squared error (MSE) estimation for the MBDE is simple and straightforward, in contrast to the complicated MSE estimation for the EBP.

17.
18.
Longitudinal health-related quality of life data arise naturally from studies of progressive and neurodegenerative diseases. In such studies, patients’ mental and physical conditions are measured over their follow-up periods and the resulting data are often complicated by subject-specific measurement times and possible terminal events associated with outcome variables. Motivated by the “Predictor’s Cohort” study on patients with advanced Alzheimer disease, we propose in this paper a semiparametric modeling approach to longitudinal health-related quality of life data. It builds upon and extends some recent developments for longitudinal data with irregular observation times. The new approach handles possibly dependent terminal events. It allows one to examine time-dependent covariate effects on the evolution of outcome variable and to assess nonparametrically change of outcome measurement that is due to factors not incorporated in the covariates. The usual large-sample properties for parameter estimation are established. In particular, it is shown that relevant parameter estimators are asymptotically normal and the asymptotic variances can be estimated consistently by the simple plug-in method. A general procedure for testing a specific parametric form in the nonparametric component is also developed. Simulation studies show that the proposed approach performs well for practical settings. The method is applied to the motivating example.

19.
This paper presents a procedure that uses the generalized maximum entropy (GME) estimation method in two steps to quantify exactly the uncertainty of the parameters of the simple linear structural measurement error model. The first step estimates the unknowns from the horizontal line, and these estimates are then used in a second step to estimate the unknowns from the vertical line. The proposed procedure minimizes the number of unknown parameters in formulating the GME system within each step, and hence reduces the variability of the estimates. Analytical and illustrative Monte Carlo simulation experiments comparing the procedure with the maximum likelihood estimators and a one-step GME estimation procedure are presented. The simulation experiments demonstrate that the two-step estimation procedure produces parameter estimates that are more accurate and more efficient than the classical estimation methods. An application of the proposed method is illustrated using a data set gathered from the Centre for Integrated Government Services in Delma Island – UAE to predict the association between perceived quality and customer satisfaction.

20.
Optimal designs for copula models
E. Perrone, Statistics, 2016, 50(4): 917–929
Copula modelling has in the past decade become a standard tool in many areas of applied statistics. A largely neglected aspect, however, concerns the design of the related experiments: in particular, whether the estimation of copula parameters can be enhanced by optimizing the experimental conditions, and how robust the parameter estimates are with respect to the type of copula employed. In this paper an equivalence theorem for (bivariate) copula models is provided that allows the formulation of efficient design algorithms and quick checks of whether designs are optimal or at least efficient. Examples illustrate that considerable gains in design efficiency can be achieved in practical situations. A natural comparison between different copula models with respect to design efficiency is provided as well.
