Found 20 similar documents (search time: 22 ms)
1.
2.
Subject dropout is an inevitable problem in longitudinal studies. It makes the analysis challenging when the main interest is the change in outcome from baseline to the study endpoint. The last observation carried forward (LOCF) method is a very common approach for handling this problem. It assumes that the last measured outcome is frozen in time after the point of dropout, an unrealistic assumption given any time trends. Though the existence and direction of the bias can sometimes be anticipated, the more important statistical question involves the actual magnitude of the bias, and this requires computation. This paper provides explicit expressions for the exact bias in the LOCF estimates of mean change and its variance when the longitudinal data follow a linear mixed-effects model with linear time trajectories. General dropout patterns are considered that may depend on treatment group and subject-specific trajectories and that may follow different time-to-dropout distributions. In our case studies, the magnitude of bias for the mean change estimators increases linearly as time to dropout decreases. The bias depends heavily on the dropout interval. The variance term is always underestimated.
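A minimal Monte Carlo sketch of the phenomenon the paper quantifies in closed form: under a linear mixed-effects model with informative dropout, the LOCF estimate of mean change from baseline is biased. The dropout mechanism and all parameter values below are hypothetical, chosen only for illustration.

```python
# Sketch of LOCF bias in the mean change from baseline under a linear
# mixed-effects model with linear trajectories. Illustrative only; the
# paper's exact closed-form bias expressions are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n, times, slope = 5000, np.arange(5), 1.0   # 5 visits, true mean slope 1.0

# Subject-specific intercepts/slopes plus measurement noise.
b0 = rng.normal(0.0, 1.0, n)
b1 = rng.normal(slope, 0.5, n)
y = b0[:, None] + b1[:, None] * times + rng.normal(0.0, 0.5, (n, len(times)))

# Informative dropout (hypothetical mechanism): steeper subjects tend to
# drop out earlier, so their carried-forward value truncates their trend.
p_drop = 1 / (1 + np.exp(-(b1 - slope)))
last = np.where(rng.random(n) < p_drop,
                rng.integers(1, len(times) - 1, n),  # early dropout visit
                len(times) - 1)                      # completer

locf_end = y[np.arange(n), last]                  # LOCF endpoint value
true_change = slope * times[-1]                   # E[change] = 1.0 * 4
locf_change = (locf_end - y[:, 0]).mean()
print(f"true mean change {true_change:.2f}, LOCF estimate {locf_change:.2f}")
```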
3.
Serkan Eryilmaz, Communications in Statistics - Theory and Methods, 2013, 42(24): 7399–7405
In this article, we obtain an exact expression for the distribution of the time to failure of a discrete-time cold standby repairable system under the classical assumptions that both the working time and the repair time of components are geometric. Our method is based on an alternative representation of the lifetime as a waiting time random variable on a binary sequence, together with combinatorial arguments. Such an exact expression for the time-to-failure distribution is new in the literature. Furthermore, we obtain the probability generating function and the first two moments of the lifetime random variable.
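A simulation sketch of the system described here, useful as a numerical check on any exact lifetime distribution. The failure and repair probabilities are hypothetical, and the paper's combinatorial derivation is not reproduced.

```python
# Monte Carlo sketch of a two-unit discrete-time cold standby repairable
# system with geometric working and repair times. The system fails when
# the active unit fails while the other unit is still under repair.
import numpy as np

rng = np.random.default_rng(1)
p_fail, p_repair, n_sim = 0.1, 0.3, 100_000

def system_lifetime():
    lifetime, repair_left = 0, 0          # repair_left: remaining repair time
    while True:
        work = rng.geometric(p_fail)      # working period of the active unit
        lifetime += work
        if repair_left > work:            # standby not ready -> system failure
            return lifetime
        repair_left = rng.geometric(p_repair)  # failed unit enters repair

lifetimes = np.array([system_lifetime() for _ in range(n_sim)])
print("mean lifetime:", lifetimes.mean())
print("P(T > 50)    :", (lifetimes > 50).mean())
```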
4.
This article considers the adaptive elastic net estimator for regularized mean regression from a Bayesian perspective. Representing the Laplace distribution as a mixture of Bartlett–Fejér kernels with a gamma mixing density, a Gibbs sampling algorithm for the adaptive elastic net is developed. By introducing slice variables, it is shown that the mixture representation yields a Gibbs sampler in which each step amounts to sampling from either a truncated normal or a truncated gamma distribution. The proposed method is illustrated using several simulation studies and the analysis of a real dataset. Both the simulation studies and the real data analysis indicate that the proposed approach performs well.
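Not the paper's kernel-mixture Gibbs sampler: as a generic point of orientation, the sketch below targets an adaptive elastic net posterior with a plain random-walk Metropolis sampler. All tuning constants (noise variance, penalty levels, adaptive weights) are hypothetical.

```python
# Random-walk Metropolis sketch targeting an adaptive elastic net posterior
#   log p(b | y) ∝ -||y - Xb||^2/(2 s2) - l1 * sum_j w_j |b_j| - l2 * ||b||^2
import numpy as np

rng = np.random.default_rng(2)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(0, 1, n)

s2, l1, l2 = 1.0, 2.0, 0.5
w = np.ones(p)                           # adaptive weights, e.g. 1/|beta_ols|

def log_post(b):
    resid = y - X @ b
    return -resid @ resid / (2 * s2) - l1 * np.sum(w * np.abs(b)) - l2 * b @ b

beta, draws = np.zeros(p), []
lp = log_post(beta)
for it in range(20_000):
    prop = beta + rng.normal(0, 0.05, p)      # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        beta, lp = prop, lp_prop
    draws.append(beta)
print("posterior means:", np.mean(draws[5000:], axis=0).round(2))
```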
5.
Jiamao Zhang, Communications in Statistics - Theory and Methods, 2018, 47(23): 5779–5794
In this article, a new robust variable selection approach is introduced by combining robust generalized estimating equations with an adaptive LASSO penalty function for longitudinal generalized linear models. An efficient weighted Gaussian pseudo-likelihood version of the BIC (WGBIC) is then proposed to choose the tuning parameter in the robust variable selection process and to select the best working correlation structure simultaneously. The oracle properties of the proposed robust variable selection method are established, and an efficient algorithm combining iterative weighted least squares and minorization–maximization is proposed to implement robust variable selection and parameter estimation.
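The sketch below isolates just the adaptive LASSO ingredient named above (the robust GEE and WGBIC machinery is not reproduced): fit an initial unpenalized model, form weights w_j = 1/|beta_j|^gamma, and solve the weighted L1 problem by the standard column-rescaling trick. The data and gamma are hypothetical.

```python
# Adaptive LASSO via column rescaling: scaling column j by 1/w_j and
# fitting an ordinary lasso imposes the weighted penalty sum_j w_j |b_j|.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

rng = np.random.default_rng(3)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0, 0.0, 1.5, 0.0])
y = X @ beta_true + rng.normal(0, 1, n)

gamma = 1.0
beta_init = LinearRegression().fit(X, y).coef_       # initial estimator
w = 1.0 / (np.abs(beta_init) ** gamma + 1e-8)        # adaptive weights

X_scaled = X / w                                     # rescale columns
fit = Lasso(alpha=0.1).fit(X_scaled, y)
beta_adaptive = fit.coef_ / w                        # unscale coefficients
print("selected:", np.nonzero(beta_adaptive)[0], beta_adaptive.round(2))
```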
6.
Eduardo F. Mendes, Christopher K. Carter, David Gunawan, Robert Kohn, Statistics and Computing, 2020, 30(4): 783–798
Particle Markov chain Monte Carlo (PMCMC) methods are used to carry out inference in nonlinear and non-Gaussian state space models, where the posterior density of the states is approximated using particles. Current approaches usually perform Bayesian inference using either a particle marginal Metropolis–Hastings (PMMH) algorithm or a particle Gibbs (PG) sampler. This paper shows how these two ways of generating variables can be combined in a flexible manner to give sampling schemes that converge to a desired target distribution. The advantage of our approach is that the sampling scheme can be tailored to obtain good results for different applications. For example, when some parameters and the states are highly correlated, such parameters can be generated using PMMH, while all other parameters are generated using PG, because it is easier to obtain good proposals for the parameters within the PG framework. We derive some convergence properties of our sampling scheme and also investigate its performance empirically by applying it to univariate and multivariate stochastic volatility models and comparing it to other PMCMC methods proposed in the literature.
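A sketch of the core ingredient of PMMH only (not the combined PMMH/PG scheme of the paper): a bootstrap particle filter returning the unbiased log-likelihood estimate for a univariate stochastic volatility model. The parameter values and data are hypothetical.

```python
# Bootstrap particle filter for the SV model
#   x_t = phi * x_{t-1} + sigma * eta_t,   y_t = exp(x_t / 2) * eps_t.
import numpy as np

rng = np.random.default_rng(4)

def simulate_sv(T, phi=0.95, sigma=0.3):
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = phi * x[t - 1] + sigma * rng.normal()
    return np.exp(x / 2) * rng.normal(size=T)

def pf_loglik(y, phi, sigma, N=500):
    T, loglik = len(y), 0.0
    x = rng.normal(0, sigma / np.sqrt(1 - phi**2), N)   # stationary init
    for t in range(T):
        x = phi * x + sigma * rng.normal(size=N)        # propagate particles
        var = np.exp(x)                                 # measurement variance
        logw = -0.5 * (np.log(2 * np.pi * var) + y[t] ** 2 / var)
        m = logw.max()
        w = np.exp(logw - m)                            # stabilized weights
        loglik += m + np.log(w.mean())                  # log p(y_t | y_{1:t-1})
        x = x[rng.choice(N, N, p=w / w.sum())]          # multinomial resample
    return loglik

y = simulate_sv(200)
print("PF log-likelihood estimate:", pf_loglik(y, 0.95, 0.3))
```

In PMMH this noisy log-likelihood estimate is plugged into an ordinary Metropolis–Hastings acceptance ratio for the model parameters.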
7.
The topic of this paper was prompted by a study for which one of us was the statistician. It was submitted to Annals of Internal Medicine. The paper received positive reviewer comments; however, the statistical reviewer stated that, for the analysis to be acceptable for publication, the missing data had to be accounted for by carrying the baseline observation forward in a last observation carried forward imputation. We discuss the issues associated with this form of imputation and recommend that it should not be undertaken as a primary analysis.
8.
The traditional mixture model assumes that a dataset is composed of several populations of Gaussian distributions. In real life, however, data often do not fit the restrictions of normality very well. It is likely that data from a single population exhibiting either asymmetrical or heavy-tail behavior could be erroneously modeled as two populations, resulting in suboptimal decisions. To avoid these pitfalls, we generalize the mixture model using adaptive kernel density estimators. Because kernel density estimators enforce no functional form, we can adapt to non-normal asymmetric, kurtotic, and tail characteristics in each population independently. This, in effect, robustifies mixture modeling. We adapt two computational algorithms, a genetic algorithm with regularized Mahalanobis distance and a genetic expectation–maximization algorithm, to optimize the kernel mixture model (KMM), and we use results from robust estimation theory to data-adaptively regularize both. Finally, we likewise extend the information criterion ICOMP to score the KMM. We use these tools to simultaneously select the best mixture model and classify all observations without making any subjective decisions. The performance of the KMM is demonstrated on two medical datasets; in both cases, we recover the clinically determined group structure and substantially improve patient classification rates over the Gaussian mixture model.
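A minimal one-dimensional sketch of the kernel-mixture idea: a two-component EM-style loop in which each component density is a weighted Gaussian KDE rather than a fixed parametric form. The genetic algorithms, robust regularization, and ICOMP scoring from the paper are not reproduced, and the data are simulated.

```python
# Two-component nonparametric EM: component densities are weighted KDEs,
# so a skewed cluster is not forced into a Gaussian shape.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
# Skewed cluster (lognormal) + symmetric cluster: hard for a Gaussian mixture.
x = np.concatenate([rng.lognormal(0, 0.5, 300), rng.normal(6, 1, 200)])

r = (x > 3).astype(float)                 # crude initial responsibilities
pi1 = r.mean()
for _ in range(20):                       # EM-style iterations
    w1, w0 = r + 1e-6, (1 - r) + 1e-6
    f1 = gaussian_kde(x, weights=w1)      # component KDEs (M-step)
    f0 = gaussian_kde(x, weights=w0)
    d1, d0 = pi1 * f1(x), (1 - pi1) * f0(x)
    r = d1 / (d1 + d0)                    # responsibilities (E-step)
    pi1 = r.mean()

labels = (r > 0.5).astype(int)
print("cluster sizes:", np.bincount(labels), "mixing weight:", round(pi1, 2))
```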
9.
In this study, we investigate linear regression models that suffer from both heteroskedasticity and collinearity. We discuss the properties of the matrix perturbation method, summarizing the important observations as theorems. We then prove the main result, which states that the heteroskedasticity-robust variances can be improved, and the resulting bias minimized, by using the matrix perturbation method. We analyze a practical example to validate the method.
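For orientation, the sketch below computes the standard heteroskedasticity-robust (sandwich) standard errors on near-collinear simulated data; the paper's matrix perturbation improvement is not reproduced.

```python
# OLS with naive vs. heteroskedasticity-robust (HC3 sandwich) standard
# errors on data with both heteroskedastic errors and a near-collinear
# regressor pair. Data are simulated for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(0, 0.05, n)          # near-collinear regressor
X = sm.add_constant(np.column_stack([x1, x2]))
y = 1 + 2 * x1 - x2 + np.abs(x1) * rng.normal(size=n)   # heteroskedastic noise

ols = sm.OLS(y, X).fit()
robust = sm.OLS(y, X).fit(cov_type="HC3") # sandwich variance estimator
print("naive SEs:", ols.bse.round(3))
print("HC3 SEs  :", robust.bse.round(3))
```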
10.
Approximate Bayesian computation (ABC) is a popular approach to address inference problems where the likelihood function is intractable, or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms that are currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (Beaumont et al., Biometrika 86:983–990, 2009; Peters et al., Technical report, 2008; Toni et al., J. Roy. Soc. Interface 6:187–202, 2009) and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.
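As background, this is the rejection-ABC baseline that MCMC and SMC variants improve upon, on a toy normal-mean problem. The tolerance, prior, and summary statistic are hypothetical choices.

```python
# Rejection ABC: draw a parameter from the prior, simulate data, and
# accept the draw when the simulated summary is close to the observed one.
import numpy as np

rng = np.random.default_rng(7)
y_obs = rng.normal(2.0, 1.0, 50)          # "observed" data, true mean 2.0
s_obs = y_obs.mean()                      # summary statistic

eps, accepted = 0.1, []
while len(accepted) < 1000:
    theta = rng.uniform(-5, 5)            # draw from the prior
    s_sim = rng.normal(theta, 1.0, 50).mean()
    if abs(s_sim - s_obs) < eps:          # accept if summaries are close
        accepted.append(theta)

post = np.array(accepted)
print(f"ABC posterior mean {post.mean():.2f} +/- {post.std():.2f}")
```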
11.
12.
Dropout is a persistent problem for longitudinal studies. We exhibit the shortcomings of the last observation carried forward method: it produces biased estimates of the change in an outcome from baseline to study endpoint under informative dropout. We develop a theoretical quantification of the effect of this bias on type I and type II error rates. We present results for a setup in which a subject either completes the study or drops out during one particular interval, and also for a setup in which subjects may drop out at any time during the study. The type I error rate steadily increases as time to dropout decreases or as the common sample size increases. The inflation in the type I error rate can be substantial when the reasons for dropout differ between the two groups, when there is a large difference in dropout rates between the control and treatment groups, and when the common sample size is large, even when dropout subjects have only one or two fewer observations than the completers. Similar results are observed for type II error rates. A study can have very low power when early-recovered patients in the treatment group and worsening patients in the control group drop out, even near the end of the study.
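A companion to the bias sketch in item 2 above: a small simulation of the type I error rate of a two-sample test on LOCF-imputed change scores when dropout reasons differ by arm. All rates and mechanisms are hypothetical; under the null both arms share the same mean trajectory.

```python
# Type I error of a t-test on LOCF change scores under differential
# informative dropout: improvers drop out early in arm 0, worseners in arm 1.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, visits, reps, rejections = 100, 4, 2000, 0

for _ in range(reps):
    changes = []
    for arm in (0, 1):
        slopes = rng.normal(1.0, 0.5, n)           # same trend in both arms
        y = slopes[:, None] * np.arange(visits + 1) \
            + rng.normal(0, 1, (n, visits + 1))
        risk = slopes if arm == 0 else -slopes     # opposite dropout reasons
        last = np.where(rng.random(n) < 1 / (1 + np.exp(-risk)),
                        1, visits)                 # dropout after visit 1
        changes.append(y[np.arange(n), last] - y[:, 0])
    if stats.ttest_ind(changes[0], changes[1]).pvalue < 0.05:
        rejections += 1

print(f"empirical type I error: {rejections / reps:.3f} (nominal 0.05)")
```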
13.
Trimming principles play an important role in robust statistics. However, their use for clustering typically requires some preliminary information about the contamination rate and the number of groups. We suggest a fresh approach to trimming that does not rely on this knowledge and that proves to be particularly suited for solving problems in robust cluster analysis. Our approach replaces the original K-population (robust) estimation problem with K distinct one-population steps, which take advantage of the good breakdown properties of trimmed estimators when the trimming level exceeds the usual bound of 0.5. In this setting, we prove that exact affine equivariance is lost on one hand but, on the other hand, an arbitrarily high breakdown point can be achieved by "anchoring" the robust estimator. We also support the use of adaptive trimming schemes, in order to infer the contamination rate from the data. A further bonus of our methodology is its ability to provide a reliable choice of the usually unknown number of groups.
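For contrast, the sketch below is a generic trimmed k-means, which, unlike the paper's approach, does assume that the number of groups K and the trimming level alpha are known. Each iteration discards the alpha fraction of points farthest from their nearest center before updating the centers.

```python
# Generic trimmed k-means on 2-D data with injected outliers.
import numpy as np

rng = np.random.default_rng(9)
X = np.vstack([rng.normal(0, 1, (100, 2)),
               rng.normal(6, 1, (100, 2)),
               rng.uniform(-10, 16, (20, 2))])      # 20 scattered outliers

K, alpha, n = 2, 0.10, len(X)
centers = X[rng.choice(n, K, replace=False)]
for _ in range(25):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)  # n x K distances
    nearest, dmin = d.argmin(1), d.min(1)
    keep = dmin <= np.quantile(dmin, 1 - alpha)     # trim the farthest points
    centers = np.array([X[keep & (nearest == k)].mean(0) for k in range(K)])

print("robust centers:\n", centers.round(2))
```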
14.
Journal of Statistical Planning and Inference, 2006, 136(9): 2936–2960
Kernel smoothing methods are widely used in many research areas in statistics. However, kernel estimators suffer from boundary effects when the support of the function to be estimated has finite endpoints. Boundary effects seriously affect the overall performance of the estimator. In this article, we propose a new method of boundary correction for univariate kernel density estimation. Our technique is based on a data transformation that depends on the point of estimation. The proposed method possesses desirable properties such as local adaptivity and non-negativity. Furthermore, unlike many other transformation methods available, the proposed estimator is easy to implement. In a Monte Carlo study, the accuracy of the proposed estimator is numerically analyzed and compared with existing methods of boundary correction. We find that it performs well for most shapes of densities. The theory behind the new methodology, along with the bias and variance of the proposed estimator, is presented. Results of a data analysis are also given.
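For contrast with the paper's location-dependent transformation, the sketch below applies the classical reflection boundary correction: for a density supported on [0, inf), reflect the sample about 0, fit a KDE, and fold it back (double it) on the positive half-line.

```python
# Reflection boundary correction for a KDE with a boundary at 0,
# illustrated on exponential data where the naive KDE is badly biased at 0.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)
x = rng.exponential(1.0, 500)             # true density f(0) = 1.0

naive = gaussian_kde(x)
reflected = gaussian_kde(np.concatenate([x, -x]))   # KDE of reflected sample

print("true f(0) = 1.0")
print("naive KDE at 0     :", naive(np.array([0.0]))[0].round(3))
print("reflection KDE at 0:", (2 * reflected(np.array([0.0]))[0]).round(3))
```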
15.
A novel approach to quantile estimation in multivariate linear regression models with change-points is proposed: the change-point detection and the model estimation are both performed automatically, by adopting either the quantile-fused penalty or its adaptive version. These two methods combine the check function used for quantile estimation with the L1 penalization principle known from signal processing and, unlike some standard approaches, they go beyond the assumptions usually required for the model errors, such as a sub-Gaussian or normal distribution. They can effectively handle heavy-tailed random error distributions and, in general, they offer a more complex view of the data, as one can obtain any conditional quantile of the target distribution, not just the conditional mean. The consistency of detection is proved and proper convergence rates for the parameter estimates are derived. The empirical performance is investigated via an extensive comparative simulation study, and practical utilization is demonstrated using a real data example.
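A minimal version of the quantile-fused idea on a simple signal: estimate a piecewise-constant conditional tau-quantile by combining the check loss with an L1 fusion penalty on neighboring coefficients. Here cvxpy is used only as a generic convex solver, and tau, lam, and the jump-detection threshold are hypothetical choices; the paper's multivariate regression setting and adaptive penalty are not reproduced.

```python
# Quantile signal segmentation: minimize check loss + L1 fused penalty.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(11)
y = np.concatenate([rng.standard_t(3, 100),          # heavy-tailed errors
                    4 + rng.standard_t(3, 100)])     # level shift at t = 100

tau, lam = 0.5, 10.0
beta = cp.Variable(len(y))
r = y - beta
check = cp.sum(0.5 * cp.abs(r) + (tau - 0.5) * r)    # quantile check loss
fused = lam * cp.norm1(cp.diff(beta))                # fuses neighbors
cp.Problem(cp.Minimize(check + fused)).solve()

jumps = np.where(np.abs(np.diff(beta.value)) > 0.5)[0]
print("detected change-points near:", jumps)
```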
16.
Andrew Montgomery Hartley, Pharmaceutical Statistics, 2015, 14(6): 488–514
Adaptive sample size adjustment (SSA) for clinical trials consists of examining early subsets of on-trial data to adjust estimates of sample size requirements. Blinded SSA is often preferred over unblinded SSA because it obviates many logistical complications of the latter and generally introduces less bias. On the other hand, current blinded SSA methods for binary data offer little to no new information about the treatment effect, ignore uncertainties associated with the population treatment proportions, and/or depend on enhanced randomization schemes that risk partial unblinding. I propose an innovative blinded SSA method for use when the primary analysis is a non-inferiority or superiority test regarding a risk difference. The method incorporates evidence about the treatment effect via the likelihood function of a mixture distribution. I compare the new method with an established one and with the fixed sample size study design, in terms of the maximization of an expected utility function. The new method maximizes the expected utility better than the comparators do, under a range of assumptions. I illustrate the use of the proposed method with an example that incorporates a Bayesian hierarchical model. Lastly, I suggest topics for future study regarding the proposed methods.
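For orientation only, and not the author's mixture-likelihood method: a standard blinded re-estimation for a risk-difference test. From blinded interim data only the pooled event rate is visible; assuming the design effect delta, the per-arm rates are backed out and the sample size recomputed. All design values are hypothetical.

```python
# Blinded sample size re-estimation for a two-arm risk-difference test,
# using the standard two-proportion sample size formula.
from scipy.stats import norm

alpha, power, delta = 0.025, 0.90, 0.15   # one-sided alpha, target power
z = norm.ppf(1 - alpha) + norm.ppf(power)

def n_per_arm(p1, p2):
    return z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2

# Design-stage assumption vs. blinded interim estimate of the pooled rate.
for label, p_bar in [("design", 0.30), ("interim", 0.38)]:
    p1, p2 = p_bar + delta / 2, p_bar - delta / 2   # split by assumed delta
    print(f"{label}: n per arm = {n_per_arm(p1, p2):.0f}")
```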
17.
The problem of modeling the relationship between a set of covariates and a multivariate response with correlated components often arises in many areas of research, such as genetics, psychometrics, and signal processing. In the linear regression framework, such a task can be addressed using a number of existing methods. In the high-dimensional sparse setting, most of these methods rely on the idea of penalization in order to efficiently estimate the regression matrix. Examples of such methods include the lasso, the group lasso, the adaptive group lasso, and the simultaneous variable selection (SVS) method. Crucially, a suitably chosen penalty also allows for an efficient exploitation of the correlation structure within the multivariate response. In this paper we introduce a novel variant of such a method, called the adaptive SVS, which is closely linked with the adaptive group lasso. Via a simulation study, we investigate its performance in the high-dimensional sparse regression setting. We provide a comparison with a number of other popular methods under different scenarios and show that the adaptive SVS is a powerful tool for the efficient recovery of signal in this setting. The methods are applied to genetic data.
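A sketch of SVS-style estimation using scikit-learn: MultiTaskLasso penalizes the L2 norm of each coefficient row, so a covariate is kept or dropped simultaneously across all responses. An adaptive variant is mimicked here by rescaling columns with weights from an initial ridge fit; this is an illustration in the spirit of the paper, not its exact estimator.

```python
# Row-wise group penalty (SVS-style) with adaptive weights via rescaling.
import numpy as np
from sklearn.linear_model import MultiTaskLasso, Ridge

rng = np.random.default_rng(12)
n, p, q = 150, 10, 3                       # q correlated responses
X = rng.normal(size=(n, p))
B = np.zeros((p, q))
B[[0, 3, 7]] = rng.normal(2, 0.5, (3, q))  # three active covariates
Y = X @ B + rng.normal(0, 1, (n, q))

B_init = Ridge(alpha=1.0).fit(X, Y).coef_.T           # p x q initial fit
w = 1.0 / (np.linalg.norm(B_init, axis=1) + 1e-8)     # row-norm weights

fit = MultiTaskLasso(alpha=0.3).fit(X / w, Y)          # adaptive rescaling
B_hat = fit.coef_.T / w[:, None]                       # unscale rows
print("selected covariates:", np.where(np.linalg.norm(B_hat, axis=1) > 0)[0])
```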
18.
M. Brannigan, Communications in Statistics - Theory and Methods, 2013, 42(18): 1823–1848
A wholly adaptive piecewise polynomial curve-fitting procedure is presented. Using an information-theoretic criterion, the number of knots necessary to give a 'good' approximation to the underlying function of the data is estimated. The optimal positioning of these knots is achieved by a suitable transformation of the constrained non-linear program.
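A sketch of information-criterion knot selection using scipy's LSQUnivariateSpline with uniformly spaced interior knots; the paper's nonlinear-programming step that optimally positions the knots is not reproduced, and the data and the BIC form used here are illustrative.

```python
# Choose the number of interior spline knots by BIC over a candidate grid.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(13)
x = np.sort(rng.uniform(0, 10, 300))
y = np.sin(x) + 0.1 * x**2 + rng.normal(0, 0.4, 300)

best = None
for k in range(1, 15):                     # candidate numbers of interior knots
    knots = np.linspace(0, 10, k + 2)[1:-1]
    spline = LSQUnivariateSpline(x, y, knots, k=3)
    rss = float(np.sum((y - spline(x)) ** 2))
    n_par = len(knots) + 4                 # cubic spline coefficient count
    bic = len(x) * np.log(rss / len(x)) + n_par * np.log(len(x))
    if best is None or bic < best[0]:
        best = (bic, k)
print(f"BIC-selected number of interior knots: {best[1]}")
```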
19.
Statistics and Computing - We investigate a new sampling scheme aimed at improving the performance of particle filters whenever (a) there is a significant mismatch between the assumed model...
20.
Hee-Seok Oh, Ta-Hsin Li, Journal of the Royal Statistical Society: Series B (Statistical Methodology), 2004, 66(1): 221–238
The paper considers the problem of estimating the entire temperature field for every location on the globe from scattered surface air temperatures observed by a network of weather stations. Classical methods such as spherical harmonics and spherical smoothing splines are not efficient in representing data that have inherent multiscale structures. The paper presents an estimation method that can adapt to the multiscale characteristics of the data. The method is based on a spherical wavelet approach that has recently been developed for the multiscale representation and analysis of scattered data. Spatially adaptive estimators are obtained by coupling the spherical wavelets with different thresholding (selective reconstruction) techniques. These estimators are compared for their spatial adaptability and extrapolation performance by using the surface air temperature data.
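A one-dimensional analogue of the wavelet thresholding used here, implemented with PyWavelets on a regular grid; the spherical-wavelet construction for scattered data on the globe is not reproduced, and the signal and the universal-threshold rule are illustrative.

```python
# Soft thresholding of wavelet detail coefficients (selective reconstruction).
import numpy as np
import pywt

rng = np.random.default_rng(14)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(6 * np.pi * t) + (t > 0.5)           # smooth part + jump
noisy = signal + 0.3 * rng.normal(size=n)

coeffs = pywt.wavedec(noisy, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise scale (MAD)
thresh = sigma * np.sqrt(2 * np.log(n))              # universal threshold
denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
denoised = pywt.waverec(denoised_coeffs, "db4")
print("RMSE noisy   :", np.sqrt(np.mean((noisy - signal) ** 2)).round(3))
print("RMSE denoised:", np.sqrt(np.mean((denoised - signal) ** 2)).round(3))
```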