Similar documents (20 results)
1.
ABSTRACT

In longitudinal studies, subjects may potentially undergo a series of sequentially ordered events. The gap times, which are the times between two serial events, are often the outcome variables of interest. This study considers quantile regression models of gap times for censored serial-event data and adapts a weighted version of the estimating equation for regression coefficients. The resulting estimators are uniformly consistent and asymptotically normal. Extensive simulation studies are presented to evaluate the finite-sample performance of the proposed methods. An analysis of the tumor recurrence data for bladder cancer patients is also provided to illustrate our proposed methods.

2.
Abstract

A convention in designing randomized clinical trials has been to choose sample sizes that yield specified statistical power when testing hypotheses about treatment response. Manski and Tetenov recently critiqued this convention and proposed enrollment of sufficiently many subjects to enable near-optimal treatment choices. This article develops a refined version of that analysis applicable to trials comparing aggressive treatment of patients with surveillance. The need for a refined analysis arises because the earlier work assumed that there is only a primary health outcome of interest, without secondary outcomes. An important aspect of choice between surveillance and aggressive treatment is that the latter may have side effects. One should then consider how the primary outcome and side effects jointly determine patient welfare. This requires new analysis of sample design. As a case study, we reconsider a trial comparing nodal observation and lymph node dissection when treating patients with cutaneous melanoma. Using a statistical power calculation, the investigators assigned 971 patients to dissection and 968 to observation. We conclude that assigning 244 patients to each option would yield findings that enable suitably near-optimal treatment choice. Thus, a much smaller sample size would have sufficed to inform clinical practice.
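The conventional power calculation this abstract critiques can be sketched as follows. The formula is the standard normal-approximation sample size for comparing two proportions; the proportions, significance level, and power below are illustrative assumptions, not values from the melanoma trial.

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided test comparing two proportions
    (normal approximation); an illustrative sketch, not the trial's design."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_b = NormalDist().inv_cdf(power)          # quantile for the target power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

n = n_per_arm(0.5, 0.6)  # hundreds of patients per arm for a 10-point difference
```

Larger assumed effect sizes shrink the required sample sharply, which is one reason power-based and welfare-based criteria can disagree so much.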

3.
Abstract

Estimation of the average treatment effect is crucial in causal inference for the evaluation of treatments or interventions in biostatistics, epidemiology, econometrics, and sociology. However, existing estimators require that either a propensity score model, an outcome model, or both be correctly specified, which is difficult to verify in practice. In this paper, we allow multiple candidate models for both the propensity score and the outcome, and construct a weighting estimator from the observed data using two-sample empirical likelihood. The resulting estimator is consistent if any one of those multiple models is correctly specified, and thus provides multiple protection of consistency. Moreover, the proposed estimator attains the semiparametric efficiency bound when one propensity score model and one outcome model are correctly specified, without requiring knowledge of which models are correct. Simulations are performed to evaluate the finite-sample performance of the proposed estimators. As an application, we analyze data collected from AIDS Clinical Trials Group Protocol 175.
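A minimal inverse-probability-weighting sketch of average treatment effect estimation, with the true propensity score assumed known; the paper's multiply robust estimator fits several candidate models instead, and the data-generating process below is invented for illustration.

```python
import math
import random

random.seed(0)

def propensity(x):
    # true propensity score, assumed known for this sketch only
    return 1 / (1 + math.exp(-(x - 0.5)))

n = 20000
terms = []
for _ in range(n):
    x = random.random()
    a = 1 if random.random() < propensity(x) else 0
    y = 1.0 * a + x + random.gauss(0, 1)   # true average treatment effect = 1
    e = propensity(x)
    # Horvitz-Thompson style IPW contrast of treated and control outcomes
    terms.append(a * y / e - (1 - a) * y / (1 - e))

ate_hat = sum(terms) / n
```

With the propensity score misspecified rather than known, this simple estimator loses consistency, which is exactly the gap the multiply robust construction addresses.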

4.
ABSTRACT

Background: Many exposures in epidemiological studies have nonlinear effects, and the problem is to choose an appropriate functional relationship between such exposures and the outcome. One common approach is to investigate several parametric transformations of the covariate of interest and to select a posteriori the function that fits the data best. However, such an approach may result in an inflated Type I error. Methods: Through a simulation study, we generated data from Cox models with different transformations of a single continuous covariate. We investigated the Type I error rate and the power of the likelihood ratio test (LRT) corresponding to three different procedures that considered the same set of parametric dose-response functions. The first, unconditional, approach did not involve any model selection, while the second, conditional, approach was based on a posteriori selection of the parametric function. The proposed third approach was similar to the second except that it used a corrected critical value for the LRT to ensure a correct Type I error. Results: The Type I error rate of the second approach was twice the nominal size. For simple monotone dose-response relationships, the corrected test had power similar to the unconditional approach, while for non-monotone dose-response relationships it had higher power. A real-life application focusing on the effect of body mass index on the risk of coronary heart disease death illustrates the advantage of the proposed approach. Conclusion: Our results confirm that selecting the functional form of the dose-response a posteriori induces Type I error inflation. The corrected procedure, which can be applied in a wide range of situations, may provide a good trade-off between Type I error and power.
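The corrected-critical-value idea can be illustrated by simulating, under the null, the maximum of the LRT statistics over the candidate functions. Treating the statistics as independent χ²(1) draws is a simplification for this sketch; in practice they are correlated and would be simulated from the fitted null model.

```python
import random

random.seed(1)
K = 3       # number of candidate dose-response transformations (illustrative)
B = 20000   # null-hypothesis simulations

# under H0, each single LRT is approximately chi-square(1) = Z^2;
# a-posteriori selection effectively uses the maximum over the K candidates
max_stats = sorted(max(random.gauss(0, 1) ** 2 for _ in range(K))
                   for _ in range(B))
corrected_cv = max_stats[int(0.95 * B)]  # simulated 95th percentile of the max
```

The corrected critical value exceeds the naive χ²(1) value of about 3.84, which is precisely why comparing the best-fitting transformation against 3.84 inflates the Type I error.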

5.
Abstract

The β-model is a natural model for characterizing the degree heterogeneity that is widespread in network data. The estimators of the model parameters in the differentially private β-model with a denoising step have been shown to be consistent and asymptotically normal. In this paper, we show that the moment estimators of the parameters based on the differentially private degree sequence, without the denoising step, are consistent and asymptotically normal.

6.
ABSTRACT

When data analysts operate within different statistical frameworks (e.g., frequentist versus Bayesian, emphasis on estimation versus emphasis on testing), how does this impact the qualitative conclusions that are drawn for real data? To study this question empirically we selected from the literature two simple scenarios—involving a comparison of two proportions and a Pearson correlation—and asked four teams of statisticians to provide a concise analysis and a qualitative interpretation of the outcome. The results showed considerable overall agreement; nevertheless, this agreement did not appear to diminish the intensity of the subsequent debate over which statistical framework is more appropriate to address the questions at hand.

7.
ABSTRACT

For multivariate regressors, the Nadaraya–Watson regression estimator suffers from the well-known curse of dimensionality. Additive models overcome this drawback. To estimate the additive components, it is usually assumed that all the data are observed. However, missing data occur in many applied statistical analyses. In this paper, we study the effect of missing responses on the estimation of the additive components. The estimators are based on marginal integration adapted to the missing-data situation. The proposed estimators turn out to be consistent under mild assumptions. A simulation study allows us to compare the behavior of our procedures under different scenarios.
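A toy Nadaraya–Watson estimator restricted to complete cases, the simplest way of handling missing responses; the paper's marginal-integration estimators for additive components are more involved, and the regression function and missingness rate below are invented for illustration.

```python
import math
import random

random.seed(3)

def nw_complete_case(x0, xs, ys, h):
    """Nadaraya-Watson estimate at x0, using only observed responses."""
    num = den = 0.0
    for x, y in zip(xs, ys):
        if y is None:                 # skip missing responses
            continue
        k = math.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weight
        num += k * y
        den += k
    return num / den

xs = [random.random() for _ in range(1000)]
ys = [x * x + random.gauss(0, 0.1) for x in xs]            # true m(x) = x^2
ys = [y if random.random() < 0.8 else None for y in ys]    # ~20% missing at random

m_hat = nw_complete_case(0.5, xs, ys, h=0.1)               # true m(0.5) = 0.25
```

When responses are missing at random given the covariate, the complete-case estimator remains consistent; more general missingness mechanisms require the weighting adjustments the paper studies.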

8.
ABSTRACT

We study partial linear models in which the linear covariates are endogenous and give rise to an over-identified model. We propose combining the profile principle with local linear approximation and the generalized method of moments (GMM) to estimate the parameters of interest. We show that the profiled GMM estimators are root-n consistent and asymptotically normally distributed. By appropriately choosing the weight matrix, the estimators can attain the efficiency bound. We further consider variable selection using the moment restrictions imposed on the endogenous variables when the dimension of the covariates may diverge with the sample size, and propose a penalized GMM procedure, which is shown to have the sparsity property. We establish asymptotic normality of the resulting estimators of the nonzero parameters. Simulation studies are presented to assess the finite-sample performance of the proposed procedure.

9.
Abstract

In some clinical, environmental, or economic studies, researchers are interested in a semi-continuous outcome variable that takes the value zero with a discrete probability and has a continuous distribution for the non-zero values. Due to the measuring mechanism, it is not always possible to fully observe some outcomes, and only an upper bound is recorded. We call this left-censored data: we observe only the maximum of the outcome and an independent censoring variable, together with an indicator. In this article, we introduce a mixture semi-parametric regression model. We consider a parametric model to investigate the influence of covariates on the discrete probability of the value zero. For the non-zero part of the outcome, a semi-parametric Cox regression model is used to study the conditional hazard function. The different parameters in this mixture model are estimated using a likelihood method, whereby the infinite-dimensional baseline hazard function is estimated by a step function. We show the identifiability and the consistency of the estimators for the different parameters in the model, study the finite-sample behaviour of the estimators through a simulation study, and illustrate the model on a practical data example.

10.
ABSTRACT

In this paper, we study a homogeneous, ergodic, finite-state Markov chain with an unknown transition probability matrix. Starting from the well-known maximum likelihood estimator of the transition probability matrix, we define estimators of reliability and related measures. Our aim is to show that these estimators are uniformly strongly consistent and converge in distribution to normal random variables. The construction of confidence intervals for availability, reliability, and failure rates is also given. Finally, we give a numerical example to illustrate and compare our results with the usual empirical estimator results.
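The maximum likelihood estimator of the transition matrix is simply the matrix of row-normalized transition counts from the observed path; a minimal sketch (the toy two-state chain below is illustrative):

```python
def mle_transition_matrix(chain, states):
    """MLE of the transition matrix: normalized counts of observed transitions."""
    counts = {s: {t: 0 for t in states} for s in states}
    for a, b in zip(chain, chain[1:]):     # consecutive pairs of the path
        counts[a][b] += 1
    P = {}
    for s in states:
        row_total = sum(counts[s].values())
        P[s] = {t: counts[s][t] / row_total if row_total else 0.0
                for t in states}
    return P

# observed path A A B A B B: transitions A->A, A->B, B->A, A->B, B->B
P_hat = mle_transition_matrix(list("AABABB"), states=["A", "B"])
```

Reliability quantities such as the probability of avoiding a failure state over k steps are then plug-in functions of `P_hat`, which is how the abstract's derived estimators are built.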

11.
Let X be a discrete-time contact process (CP) on ℤ², as defined by Durrett and Levin (1994, Stochastic spatial models: a user's guide to ecological applications. Philosophical Transactions of the Royal Society of London Series B, 343, 329–350). We study the estimation of the model based on the space-time evolution of X, that is, T + 1 successive observations of X on a finite subset S of sites. We consider the maximum marginal pseudo-likelihood (MPL) estimator and show that, as T→∞, this estimator is consistent and asymptotically normal for a non-vanishing supercritical CP. Numerical studies confirm the theoretical results.

12.
Econometric Reviews, 2013, 32(3), 215–228
Abstract

Decisions based on econometric model estimates may not have the expected effect if the model is misspecified. Thus, specification tests should precede any analysis. Bierens' specification test is consistent and has optimality properties against some local alternatives. A shortcoming is that the test statistic is not distribution-free, even asymptotically, which makes the test infeasible. There have been many suggestions to circumvent this problem, including the use of upper bounds for the critical values. However, these suggestions lead to tests that lose power and optimality against local alternatives. In this paper we show that bootstrap methods allow us to recover the power and optimality of Bierens' original test. The bootstrap also provides reliable p-values, which have a central role in Fisher's theory of hypothesis testing. The paper also includes a discussion of the properties of the bootstrap nonlinear least squares estimator under local alternatives.
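The bootstrap p-value idea, resampling under an imposed null, can be sketched for a simple mean test; Bierens' actual statistic and the nonlinear-least-squares resampling scheme in the paper are far more elaborate, and the data below are invented.

```python
import random
import statistics

random.seed(0)

def t_stat(x):
    return statistics.mean(x) / (statistics.stdev(x) / len(x) ** 0.5)

data = [random.gauss(1.0, 1.0) for _ in range(100)]  # true mean 1; H0: mean = 0
t_obs = t_stat(data)

# impose the null by centering, then resample to approximate the null law of t
centered = [x - statistics.mean(data) for x in data]
B = 999
exceed = 0
for _ in range(B):
    boot = [random.choice(centered) for _ in range(len(data))]
    if abs(t_stat(boot)) >= abs(t_obs):
        exceed += 1
p_value = (exceed + 1) / (B + 1)   # bootstrap p-value with the +1 correction
```

Because the resampling is done under the null, the resulting p-value is asymptotically valid even when the statistic's null distribution depends on unknown features of the data, which is the situation Bierens' test faces.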

13.
ABSTRACT

Let {y_t} be a Poisson-like process with mean μ_t, a periodic function of time t. We discuss how to fit this type of data using the quasi-likelihood method. Our method provides a new avenue for fitting time series data when the usual assumptions of stationarity and homogeneous residual variances are invalid. We show that the estimators obtained are strongly consistent and asymptotically normal.
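As a crude illustration (phase-wise averaging rather than the quasi-likelihood fit developed in the paper), the periodic mean of a Poisson count series can be recovered by averaging observations that share a phase; the period and mean function below are invented.

```python
import math
import random

random.seed(2)

def rpois(lam):
    """Knuth's Poisson sampler (illustrative, fine for small means)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

period, cycles = 12, 500
mu = [math.exp(1.0 + 0.5 * math.sin(2 * math.pi * s / period))
      for s in range(period)]                       # true periodic mean
y = [rpois(mu[t % period]) for t in range(period * cycles)]

# estimate the periodic mean by averaging within each phase
mu_hat = [sum(y[s::period]) / cycles for s in range(period)]
```

Unlike this moment-style sketch, the quasi-likelihood approach needs only the mean-variance relationship, not full distributional assumptions, which is what makes it robust to the Poisson-like (rather than exactly Poisson) setting.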

14.
ABSTRACT

New invariant and consistent goodness-of-fit tests for multivariate normality are introduced. The tests are based on the Karhunen–Loève transformation of a multidimensional sample from a population. A comparison of the simulated powers of these and other well-known tests with respect to some alternatives is given. The simulation study demonstrates that the power of the proposed McCull test is almost independent of the number of grouping cells. The test shows an advantage over other chi-squared type tests. However, averaged over all of the simulated conditions examined in this article, the Anderson–Darling type and Cramér–von Mises type tests seem to be the best.

15.
ABSTRACT

Because of its flexibility and usefulness, the Akaike Information Criterion (AIC) has been widely used for clinical data analysis. In general, however, AIC is used without paying much attention to sample size. If sample sizes are not large enough, it is possible that the AIC approach does not lead to the conclusions we seek. This article focuses on sample size determination for the AIC approach to clinical data analysis. We consider a situation in which outcome variables are dichotomous and propose a method for sample size determination in this setting. The basic idea also applies to situations in which outcome variables have more than two categories or are continuous. We present simulation studies and an application to an actual clinical trial.
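AIC itself (2k − 2 log L) is easy to compute for dichotomous outcomes; a small deterministic example comparing a pooled model with an arm-specific model, with response counts that are made up for illustration:

```python
import math

def bernoulli_loglik(successes, n):
    """Maximized Bernoulli log-likelihood with p estimated by successes/n."""
    p = successes / n
    return successes * math.log(p) + (n - successes) * math.log(1 - p)

# invented counts: 30/100 responders in arm A, 50/100 in arm B
pooled_ll = bernoulli_loglik(80, 200)                                # 1 parameter
separate_ll = bernoulli_loglik(30, 100) + bernoulli_loglik(50, 100)  # 2 parameters

aic_pooled = 2 * 1 - 2 * pooled_ll
aic_separate = 2 * 2 - 2 * separate_ll
# the model with the smaller AIC is preferred
```

With small samples the same comparison can easily flip toward the pooled model even when the arms truly differ, which is the sample size issue the article addresses.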

16.
ABSTRACT

The Cox proportional hazards regression model has been widely used to estimate the effect of a prognostic factor on a time-to-event outcome. In a survey of survival analyses in cancer journals, it was found that only 5% of studies using the Cox proportional hazards model attempted to verify the underlying assumption. Usually an estimate of the treatment effect from fitting a Cox model is reported without validation of the proportionality assumption, and it is not clear how such an estimate should be interpreted if that assumption is violated. In this article, we show that the estimate of the treatment effect from a Cox regression model can be interpreted as a weighted average of the log-scaled hazard ratio over the duration of the study. A hypothetical example is used to explain the weights.
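The weighted-average interpretation can be sketched numerically: if the hazard ratio is 0.5 early in follow-up and 1.0 later, and the weights (roughly, each period's share of statistical information) are 0.6 and 0.4, the single Cox estimate lands between the two. The numbers here are invented; the actual weights in the article are derived analytically.

```python
import math

log_hr = [math.log(0.5), math.log(1.0)]   # period-specific log hazard ratios
weights = [0.6, 0.4]                      # illustrative information shares

avg_log_hr = sum(w * b for w, b in zip(weights, log_hr))
overall_hr = math.exp(avg_log_hr)         # single reported "hazard ratio"
```

The sketch shows why a Cox estimate fitted under a violated proportionality assumption is a summary of a time-varying effect, not an effect that holds at every time point.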

17.
ABSTRACT

This paper considers adaptation of hierarchical models for small area disease counts to detect disease clustering. A high risk area may be an outlier (in local terms) if surrounded by low risk areas, whereas a high risk cluster requires that both the focus area and surrounding areas demonstrate common elevated risk. A local join count method is suggested to detect local clustering of high disease risk in a single health outcome, and extends to assessing bivariate spatial clustering in relative risk. Applications include assessing spatial heterogeneity in effects of area predictors according to local clustering configuration, and gauging sensitivity of bivariate clustering to random effect assumptions.

18.
ABSTRACT

The problem of wavelet density estimation is studied when the sample observations are contaminated with random noise. In this paper, a linear wavelet estimator based on Meyer-type wavelets is shown to be strongly consistent when the Fourier transform of the random noise decays polynomially or exponentially.

19.
Abstract

The presence of a maverick judge, one whose rankings differ greatly from those of the other members of a panel, can result in incorrect rankings and a sense of unfairness among contestants. We develop and explore the properties of a likelihood ratio test, assuming a Mallows-type distribution, for the presence of a maverick judge when each judge selects his or her best k out of n objects, k ≤ n. Detecting a maverick judge, who may be viewed as a multivariate outlier, turns out to be very difficult unless the judges are very consistent and there are repeat observations on the panel.

20.
Abstract

The regression model with an ordinal outcome has been widely used in many fields. Predictors measured with error and multicollinearity are long-standing problems that often occur in regression analysis, yet there are few studies on measurement error models with a general ordinal response, and even fewer when multicollinearity is also present. The purpose of this article is to estimate the parameters of ordinal probit models with measurement error and multicollinearity. First, we propose using regression calibration and refined regression calibration to estimate the parameters of ordinal probit models with measurement error. Second, we develop new methods to obtain estimators of the parameters in the presence of both multicollinearity and measurement error in the ordinal probit model. Furthermore, we extend all the methods to quadratic ordinal probit models and discuss the ordinal logistic case. These estimators are consistent and asymptotically normally distributed under general conditions. They are easy to compute, perform well, and are robust against the normality assumption for the predictor variables in our simulation studies. The proposed methods are applied to several real datasets.
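The flavor of regression calibration shows up already in the simplest linear, single-predictor setting: the naive slope is attenuated by the reliability ratio λ = σ²_x/(σ²_x + σ²_u), and dividing by an estimate of λ corrects it. The ordinal probit versions in the article are considerably more involved; this simulation assumes the measurement error variance is known, and all parameter values are invented.

```python
import random

random.seed(4)

n, beta, sigma_u2 = 5000, 2.0, 1.0    # measurement error variance assumed known
x = [random.gauss(0, 1) for _ in range(n)]                 # true predictor
w = [xi + random.gauss(0, sigma_u2 ** 0.5) for xi in x]    # error-prone version
y = [beta * xi + random.gauss(0, 1) for xi in x]

def mean(v):
    return sum(v) / len(v)

mw, my = mean(w), mean(y)
var_w = mean([(wi - mw) ** 2 for wi in w])
cov_wy = mean([(wi - mw) * (yi - my) for wi, yi in zip(w, y)])

beta_naive = cov_wy / var_w              # attenuated toward zero (near beta/2 here)
lam_hat = (var_w - sigma_u2) / var_w     # estimated reliability ratio
beta_cal = beta_naive / lam_hat          # regression-calibration-style correction
```

In practice σ²_u is usually estimated from replicate measurements rather than known, and in the ordinal probit case the calibrated predictor enters a latent-variable likelihood instead of a slope formula.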
