Similar Documents (20 results)
1.
In comparison to other experimental studies, multicollinearity appears frequently in mixture experiments, a special study area of response surface methodology, due to the constraints on the components composing the mixture. When mixture experiments are analysed with a special generalized linear model, the logistic regression model, multicollinearity causes precision problems in the maximum-likelihood logistic regression estimates. Effects due to multicollinearity can be reduced to a certain extent by using alternative approaches, one of which is to use biased estimators for the coefficients. In this paper, we suggest the use of the logistic ridge regression (RR) estimator when multicollinearity is present in the analysis of mixture experiments using logistic regression. For the selection of the biasing parameter, we use fraction of design space plots to evaluate the effect of the logistic RR estimator with respect to the scaled mean squared error of prediction. The suggested graphical approaches are illustrated on a tumor incidence data set.
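A minimal sketch of the ridge-penalised logistic regression idea described above, not the authors' exact estimator: mixture-style components that sum to one induce collinearity, and an L2 penalty indexed by a biasing parameter k is varied over a grid. The simulated data, the grid of k values, and the crude prediction-error summary are illustrative assumptions.

```python
# Ridge (L2-penalised) logistic regression over a grid of biasing parameters k.
# Illustrative sketch only; data and grid are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Mixture-style components: columns are collinear because they sum to 1.
x1 = rng.uniform(0.2, 0.6, 200)
x2 = rng.uniform(0.1, 0.5, 200)
x3 = 1.0 - x1 - x2                      # mixture constraint induces collinearity
X = np.column_stack([x1, x2, x3])
eta = 1.5 * x1 - 2.0 * x2 + 0.5 * x3
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

for k in [1e-4, 1e-2, 1e-1, 1.0, 10.0]:     # candidate biasing parameters
    # sklearn parameterises the L2 penalty as C = 1/k (larger k = more shrinkage)
    fit = LogisticRegression(penalty="l2", C=1.0 / k, solver="lbfgs",
                             max_iter=1000).fit(X, y)
    p_hat = fit.predict_proba(X)[:, 1]
    mse_pred = np.mean((p_hat - y) ** 2)    # crude prediction-error summary
    print(f"k={k:>7}: coefs={np.round(fit.coef_[0], 3)}, MSE={mse_pred:.4f}")
```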

2.
The problem of building bootstrap confidence intervals for small probabilities with count data is addressed. The law of the independent observations is assumed to be a mixture of a given family of power series distributions. The mixing distribution is estimated by nonparametric maximum likelihood and the corresponding mixture is used for resampling. We build percentile-t and Efron percentile bootstrap confidence intervals for the probabilities and prove their consistency in probability. The new theoretical results are supported by simulation experiments for Poisson and geometric mixtures. We compare percentile-t and Efron percentile bootstrap intervals with eight other bootstrap or asymptotic-theory-based intervals. It appears that Efron percentile bootstrap intervals outperform the competitors in terms of coverage probability and length.
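A minimal sketch of the two interval types compared above, percentile-t and Efron percentile, for a small probability P(X = 0) with count data. The paper resamples from a mixture fitted by nonparametric maximum likelihood; here, purely for brevity, plain nonparametric resampling from the empirical distribution is used, and the Poisson-gamma data are an assumption.

```python
# Percentile-t vs Efron-percentile bootstrap intervals for p = P(X = 0).
import numpy as np

rng = np.random.default_rng(1)
x = rng.poisson(lam=rng.gamma(2.0, 1.5, size=500))   # a Poisson mixture sample
n, B = len(x), 2000

p_hat = np.mean(x == 0)
se_hat = np.sqrt(p_hat * (1 - p_hat) / n)

t_stats, p_stars = [], []
for _ in range(B):
    xb = rng.choice(x, size=n, replace=True)
    pb = np.mean(xb == 0)
    seb = np.sqrt(pb * (1 - pb) / n)
    p_stars.append(pb)
    if seb > 0:
        t_stats.append((pb - p_hat) / seb)

t_hi, t_lo = np.quantile(t_stats, [0.975, 0.025])     # upper and lower t quantiles
perc_t = (p_hat - t_hi * se_hat, p_hat - t_lo * se_hat)   # percentile-t interval
efron  = tuple(np.quantile(p_stars, [0.025, 0.975]))      # Efron percentile interval
print("percentile-t:", perc_t, " Efron percentile:", efron)
```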

3.
Based on the large-sample normal distribution of the sample log odds ratio and its asymptotic variance from maximum likelihood logistic regression, shortest 95% confidence intervals for the odds ratio are developed. Although the usual confidence interval on the odds ratio is unbiased, the shortest interval is not. That is, while covering the true odds ratio with the stated probability, the shortest interval covers some values below the true odds ratio with higher probability. The upper and lower limits of the shortest interval are shifted to the left of those of the usual interval, with greater shifts in the upper limits. For the log odds model γ + βX, in which X is binary, simulation studies showed that the approximate average percent difference in length is 7.4% for n (sample size) = 100, and 3.8% for n = 200. Precise estimates of the coverage probabilities of the two types of intervals were obtained from simulation studies, and are compared graphically. For odds ratio estimates greater (less) than one, shortest intervals are more (less) likely to include one than are the usual intervals. The usual intervals are likelihood-based and the shortest intervals are not. The usual intervals have minimum expected length among the class of unbiased intervals. Shortest intervals do not provide important advantages over the usual intervals, which we recommend for practical use.
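A worked sketch of the "shortest interval" construction described above: the log odds ratio estimate is treated as normal, and the pair of normal quantiles (a, b) with 95% coverage is chosen to minimise the interval length on the odds-ratio scale rather than the log scale. The estimate, standard error, and search bounds below are illustrative assumptions, not values from the paper.

```python
# Shortest vs usual 95% interval for an odds ratio under lognormal approximation.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize_scalar

log_or, se = np.log(2.5), 0.30          # hypothetical ML estimate and std. error

def or_length(a):
    """Length on the OR scale of exp(log_or + q*se) with lower z-quantile a."""
    b = norm.ppf(norm.cdf(a) + 0.95)     # upper quantile giving 95% coverage
    return np.exp(log_or + b * se) - np.exp(log_or + a * se)

res = minimize_scalar(or_length, bounds=(-4.0, -1.70), method="bounded")
a = res.x
b = norm.ppf(norm.cdf(a) + 0.95)

usual    = (np.exp(log_or - 1.96 * se), np.exp(log_or + 1.96 * se))
shortest = (np.exp(log_or + a * se),    np.exp(log_or + b * se))
print("usual 95% CI:   ", usual)
print("shortest 95% CI:", shortest)      # both limits shifted left of the usual CI
```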

4.
Logistic regression using conditional maximum likelihood estimation has recently gained widespread use. Many of the applications of logistic regression have been in situations in which the independent variables are collinear. It is shown that collinearity among the independent variables seriously affects the conditional maximum likelihood estimator, in that the variance of this estimator is inflated in much the same way that collinearity inflates the variance of the least squares estimator in multiple regression. Drawing on the similarities between multiple and logistic regression, several alternative estimators that reduce the effect of the collinearity and are easy to obtain in practice are suggested and compared in a simulation study.
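A quick numeric illustration of the variance inflation described above: logistic maximum likelihood standard errors under nearly collinear predictors versus independent ones. This is not one of the paper's alternative estimators; the simulated data and correlation level are assumptions.

```python
# Standard errors of logistic ML coefficients under collinearity vs independence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
x1 = rng.normal(size=n)

for rho, label in ((0.0, "independent"), (0.98, "nearly collinear")):
    x2 = rho * x1 + np.sqrt(1 - rho**2) * rng.normal(size=n)
    X = sm.add_constant(np.column_stack([x1, x2]))
    p = 1 / (1 + np.exp(-(0.5 * x1 + 0.5 * x2)))
    y = rng.binomial(1, p)
    fit = sm.Logit(y, X).fit(disp=0)
    bse = np.asarray(fit.bse)                     # estimated standard errors
    print(f"{label:>17}: SE(beta1)={bse[1]:.3f}, SE(beta2)={bse[2]:.3f}")
```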

5.
Statistical inferences for probability distributions involving truncation parameters have received recent attention in the literature. One aspect of these inferences is the question of shortest confidence intervals for parameters or parametric functions of these models. The topic is a classical one, and the approach follows the usual theory. In all literature treatments the authors consider specific models and derive confidence intervals (not necessarily shortest). All of these models can, however, be considered as special cases of a more general one. The use of this model enables one to easily obtain shortest confidence intervals and to unify the different approaches. In addition, it provides a useful technique for classroom presentation of the topic.

6.
We examine the rationale of prospective logistic regression analysis for pair-matched case-control data using explicit, parametric terms for the matching variables in the model. We show that this approach can yield inconsistent estimates of the disease-exposure odds ratio, even in large samples. Some special conditions are given under which the bias for the disease-exposure odds ratio is small. Because these conditions are not especially uncommon, this flawed analytic method can appear to possess an (unreasonable) effectiveness.

7.
We develop an approach to evaluating frequentist model averaging procedures by considering them in a simple situation in which there are two nested linear regression models over which we average. We introduce a general class of model-averaged confidence intervals, obtain exact expressions for the coverage and the scaled expected length of the intervals, and use these to compute these quantities for the model-averaged profile likelihood (MPL) and model-averaged tail area confidence intervals proposed by D. Fletcher and D. Turek. We show that the MPL confidence intervals can perform more poorly than the standard confidence interval used after model selection but ignoring the model selection process. The model-averaged tail area confidence intervals perform better than the MPL and post-model-selection confidence intervals but, for the examples that we consider, offer little over simply using the standard confidence interval for θ under the full model, with the same nominal coverage.

8.
This article proposes a mixture double autoregressive model by introducing the flexibility of mixture models to the double autoregressive model, a novel conditional heteroscedastic model recently proposed in the literature. To make the model more flexible, the mixing proportions are further assumed to be time-varying, and probabilistic properties including strict stationarity and higher-order moments are derived. Inference tools, including maximum likelihood estimation, an expectation–maximization (EM) algorithm for computing the estimator, and an information criterion for model selection, are carefully studied for the logistic mixture double autoregressive model, which has two components and is encountered more frequently in practice. Monte Carlo experiments give further support to the new models, and the analysis of an empirical example is also reported.

9.
An explicit form of confidence intervals for the treatment effect in the random effects meta-analysis model, obtained from the Harville–Jeske–Kenward–Roger approach, is given. These restricted-likelihood-based intervals are compared to alternative procedures commonly used in collaborative studies when the number of participants is small and the study-specific variances are heterogeneous. Monte Carlo simulation experiments show that the former intervals have quite conservative coverage probabilities, and the results favor the latter intervals.

10.
Many credit risk models are based on the selection of a single logistic regression model on which to base parameter estimation. When many competing models are available, and without enough guidance from economic theory, model averaging represents an appealing alternative to the selection of a single model. Although model averaging approaches have been present in statistics for many years, only recently have they started to receive attention in economics and finance applications. This contribution shows how Bayesian model averaging can be applied to credit risk estimation, a research area that has received a great deal of attention recently, especially in the light of the global financial crisis of the last few years and the related attempts to regulate international finance. The paper considers the use of logistic regression models under the Bayesian model averaging paradigm. We argue that Bayesian model averaging is not only more correct from a theoretical viewpoint, but also slightly superior, in terms of predictive performance, to single selected models.
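A minimal sketch of model averaging over candidate logistic regressions using BIC-based approximate posterior model weights (exp(-BIC/2), normalised). This is only one common approximation to Bayesian model averaging, not the paper's implementation; the simulated "credit" data and the set of candidate models are assumptions.

```python
# Approximate BMA over all subsets of three predictors in a logistic model.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 3))                       # e.g. income, debt ratio, age
eta = -1.0 + 0.8 * X[:, 0] - 1.2 * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))       # 1 = default

models, bics, preds = [], [], []
for k in range(1, 4):
    for cols in itertools.combinations(range(3), k):
        Xd = sm.add_constant(X[:, cols])
        fit = sm.Logit(y, Xd).fit(disp=0)
        models.append(cols)
        bics.append(fit.bic)
        preds.append(fit.predict(Xd))

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))
w /= w.sum()                                      # approximate posterior model probabilities

p_bma = np.average(np.column_stack(preds), axis=1, weights=w)
for m, wi in zip(models, w):
    print(f"model with columns {m}: weight {wi:.3f}")
print("first five model-averaged default probabilities:", np.round(p_bma[:5], 3))
```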

11.
The good performance of logit confidence intervals for the odds ratio with small samples is well known, unless the actual odds ratio is very large. In single capture–recapture estimation the odds ratio is equal to 1 because of the assumption of independence of the samples. Consequently, a transformation of the logit confidence interval for the odds ratio is proposed in order to estimate the size of a closed population under single capture–recapture estimation. It is found that the transformed logit interval, after adding .5 to each observed count before computation, has actual coverage probabilities close to the nominal level even for small populations and even for capture probabilities close to 0 or 1, which is not guaranteed for the other capture–recapture confidence intervals proposed in the statistical literature. Thus, given that the .5 transformed logit interval is very simple to compute and performs well, it is suitable for implementation by most users of the single capture–recapture method.
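A rough sketch of the flavour of the construction above, not the published formula: with two capture occasions, let a = caught in both, b = first only, c = second only, and d the unobserved count. Independence (odds ratio = 1) gives d ≈ bc/a, so a log-scale interval for d, using a log-odds-ratio-style variance after adding 0.5 to each observed count, translates into an interval for N = a + b + c + d. The exact transformation and variance used in the paper may differ; everything here is an illustrative assumption.

```python
# Illustrative transformed interval for a closed-population size N.
import numpy as np
from scipy.stats import norm

a_obs, b_obs, c_obs = 15, 40, 35                 # hypothetical capture counts
a, b, c = a_obs + 0.5, b_obs + 0.5, c_obs + 0.5  # the .5 adjustment

d_hat = b * c / a                                # unobserved cell under independence
se_log_d = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d_hat)   # log-OR-style variance (assumed)
z = norm.ppf(0.975)

d_lo, d_hi = d_hat * np.exp(-z * se_log_d), d_hat * np.exp(z * se_log_d)
n_obs = a_obs + b_obs + c_obs
print("N point estimate:", round(n_obs + d_hat, 1))
print("95% interval for N:", (round(n_obs + d_lo, 1), round(n_obs + d_hi, 1)))
```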

12.
This article investigates the performance of two jackknife techniques under an asymptotic model in which the number of 2 × 2 tables increases but the possible marginal configurations remain fixed. These approaches are applied to the Mantel–Haenszel estimator, or transformed versions of this estimator, respectively. The resulting jackknife estimators are shown to be consistent for the common odds ratio. Their asymptotic distributions are derived; they can be used for constructing appropriate sparse-data confidence intervals.
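A minimal sketch of the setting above: the Mantel–Haenszel common odds ratio across K sparse 2 × 2 tables, with a delete-one-table jackknife standard error on the log scale. The simulated tables are illustrative assumptions, and the paper's transformed-estimator variants are not reproduced.

```python
# Mantel-Haenszel common OR with a delete-one-table jackknife.
import numpy as np

rng = np.random.default_rng(3)
K, true_or = 50, 2.0

tables = []
for _ in range(K):
    p0 = rng.uniform(0.2, 0.5)
    p1 = true_or * p0 / (1 - p0 + true_or * p0)    # exposed-group probability
    a = rng.binomial(8, p1); b = 8 - a             # exposed: cases / non-cases
    c = rng.binomial(8, p0); d = 8 - c             # unexposed: cases / non-cases
    tables.append((a, b, c, d))
tables = np.array(tables, dtype=float)

def mh_or(t):
    a, b, c, d = t[:, 0], t[:, 1], t[:, 2], t[:, 3]
    n = a + b + c + d
    return np.sum(a * d / n) / np.sum(b * c / n)

full = np.log(mh_or(tables))
loo = np.array([np.log(mh_or(np.delete(tables, k, axis=0))) for k in range(K)])
pseudo = K * full - (K - 1) * loo                  # jackknife pseudo-values
jk_est, jk_se = pseudo.mean(), pseudo.std(ddof=1) / np.sqrt(K)
print("MH log-OR:", round(full, 3), " jackknife SE:", round(jk_se, 3))
print("95% CI for OR:", np.round(np.exp([jk_est - 1.96 * jk_se,
                                         jk_est + 1.96 * jk_se]), 3))
```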

13.
This paper studies the construction of confidence intervals for the odds ratio under the independent inverse sampling design, covering three existing methods as well as a saddlepoint approximation method introduced here. The confidence intervals produced by these four methods are compared through simulation. The simulation results show that the interval based on the saddlepoint approximation is no worse than the other three methods, and in some situations it outperforms them.

14.
The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant policy impacts. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences depends on sample size. The purpose of this article is to examine the impact of alternative data sets on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model with observational data. A number of simulations are conducted examining the impact of sample size, nonlinear predictors, and multicollinearity on substantive inferences (e.g., odds ratios, marginal effects) when using logistic regression models. Findings suggest that small sample sizes can negatively affect the quality of parameter estimates and inferences in the presence of rare events, multicollinearity, and nonlinear predictor functions, but marginal effects estimates are relatively more robust to sample size.
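A minimal sketch of the kind of simulation described above: repeated logistic regression fits at several sample sizes with a rare outcome, summarising how the estimated odds ratio behaves. The data-generating values, number of replications, and sample sizes are assumptions.

```python
# Small-sample behaviour of the logistic ML odds ratio under a rare event.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
beta0, beta1 = -3.0, 0.7                     # rare event, true OR = exp(0.7)

for n in (50, 200, 1000):
    est = []
    for _ in range(300):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
        y = rng.binomial(1, p)
        if y.sum() < 2:                      # skip degenerate, nearly all-zero samples
            continue
        try:
            fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
            est.append(np.asarray(fit.params)[1])
        except Exception:                    # separation / non-convergence
            continue
    est = np.array(est)
    print(f"n={n:>4}: mean OR estimate {np.exp(est).mean():.2f} "
          f"(true {np.exp(beta1):.2f}), usable fits {len(est)}")
```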

15.
This paper presents an extension of mean-squared forecast error (MSFE) model averaging for integrating linear regression models computed on data frames of various lengths. The proposed method is considered a preferable alternative to best-model selection by efficiency criteria such as the Bayesian information criterion (BIC), the Akaike information criterion (AIC), F-statistics, and mean-squared error (MSE), as well as to Bayesian model averaging (BMA) and a naïve simple forecast average. The method is developed to deal with possibly non-nested models having different numbers of observations, and it selects forecast weights by minimizing an unbiased estimator of the MSFE. The proposed method also yields forecast confidence intervals with a given significance level, which is not possible when applying other model averaging methods. In addition, out-of-sample simulation and empirical testing demonstrate the efficiency of this kind of averaging when forecasting economic processes.

16.
Edgeworth expansions are derived for conditional distributions of sufficient statistics as well as conditional maximum likelihood estimators of log odds ratios in logistic regression models assuming that the risk factors are not almost equally distanced. Expansions are given in several special cases. Similar results are obtained for models with polytomous outcomes.

17.
In many applications of linear regression models, randomness due to model selection is commonly ignored in post-model selection inference. In order to account for the model selection uncertainty, least-squares frequentist model averaging has been proposed recently. We show that the confidence interval from model averaging is asymptotically equivalent to the confidence interval from the full model. The finite-sample confidence intervals based on approximations to the asymptotic distributions are also equivalent if the parameter of interest is a linear function of the regression coefficients. Furthermore, we demonstrate that this equivalence also holds for prediction intervals constructed in the same fashion.

18.
In an attempt to apply robust procedures, conventional t-tables are used to approximate critical values of a Studentized t-statistic formed from the ratio of a trimmed mean to the square root of a suitably normed Winsorized sum of squared deviations. It is shown here that the approximation is poor if the proportion of trimming is chosen to depend on the data. Instead, a data-dependent alternative is given which uses adaptive trimming proportions and confidence intervals based on trimmed likelihood statistics. The resulting statistics have high efficiency at the normal model and proper coverage for confidence intervals, yet retain a breakdown point of one half. Average lengths of confidence intervals are competitive with those of recent Studentized confidence intervals based on the biweight over a range of underlying distributions. In addition, the adaptive trimming is used to identify potential outliers. Evidence in the form of simulations and data analysis supports the new adaptive trimming approach.
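A minimal sketch of the Studentized trimmed-mean statistic discussed above: a symmetric trimmed mean divided by a standard error built from the Winsorized sum of squared deviations, with a t-based confidence interval. The fixed 20% trimming and the particular norming convention below are assumptions; the paper's adaptive, data-dependent trimming and trimmed-likelihood intervals are not reproduced.

```python
# Trimmed mean with a Winsorized-variance standard error and t-interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = np.sort(rng.standard_t(df=3, size=60))       # heavy-tailed sample
n, gamma = len(x), 0.20
g = int(np.floor(gamma * n))                     # observations trimmed from each tail

trimmed = x[g:n - g]
t_mean = trimmed.mean()

# Winsorize: replace the g smallest/largest values by the nearest retained value.
winsorized = np.concatenate([np.full(g, x[g]), trimmed, np.full(g, x[n - g - 1])])
s_w = np.sqrt(np.sum((winsorized - winsorized.mean()) ** 2) / (n - 1))

h = n - 2 * g                                    # number of retained observations
se = s_w / ((h / n) * np.sqrt(n))                # one common norming convention
t_crit = stats.t.ppf(0.975, df=h - 1)

print("trimmed mean:", round(t_mean, 3))
print("95% CI:", (round(t_mean - t_crit * se, 3), round(t_mean + t_crit * se, 3)))
```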

19.
In many electrophysiological experiments the main objectives include estimation of the firing rate of a single neuron, as well as a comparison of its temporal evolution across different experimental conditions. To accomplish these two goals, we propose a flexible approach based on the logistic generalized additive model including condition-by-time interactions. If an interaction of this type is detected in the model, we then show that temporal odds ratio curves are very useful in discriminating between the conditions under which the firing probability is higher. Bootstrap techniques are used for testing for interactions and constructing pointwise confidence bands for the true odds ratio curves. Finally, we apply the new methodology to assessing relationships between neural response and decision-making in movement-selective neurons in the prefrontal cortex of behaving monkeys.

20.
This paper considers the analysis of time-to-event data in the presence of collinearity between covariates. In linear and logistic regression models, the ridge regression estimator has been applied as an alternative to the maximum likelihood estimator in the presence of collinearity. The advantage of the ridge regression estimator over the usual maximum likelihood estimator is that the former often has a smaller total mean square error and is thus more precise. In this paper, we generalize this approach for addressing collinearity to the Cox proportional hazards model. Simulation studies were conducted to evaluate the performance of the ridge regression estimator. Our approach was motivated by an occupational radiation study conducted at Oak Ridge National Laboratory to evaluate health risks associated with occupational radiation exposure, in which the exposure tends to be correlated with possible confounders such as years of exposure and attained age. We applied the proposed methods to this study to evaluate the association of radiation exposure with all-cause mortality.
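A minimal sketch of the ridge idea above applied to the Cox model: an L2 penalty added to the partial likelihood shrinks the coefficients of correlated covariates. It uses the lifelines package's penalized Cox fitter; the simulated exposure/age data, censoring scheme, and penalty values are illustrative assumptions, not the study's analysis.

```python
# Ridge-penalized Cox proportional hazards fits at several penalty values.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(6)
n = 500
age = rng.normal(50, 8, n)
dose = 0.08 * age + rng.normal(0, 1.0, n)            # exposure correlated with age
hazard = np.exp(0.03 * (age - 50) + 0.25 * dose)
time = rng.exponential(1 / hazard)
cutoff = np.quantile(time, 0.8)
event = (time < cutoff).astype(int)                  # ~20% administrative censoring
time = np.minimum(time, cutoff)

df = pd.DataFrame({"time": time, "event": event, "age": age, "dose": dose})

for pen in (0.0, 0.5, 5.0):                          # 0.0 = ordinary Cox MLE
    cph = CoxPHFitter(penalizer=pen, l1_ratio=0.0)   # pure L2 (ridge) penalty
    cph.fit(df, duration_col="time", event_col="event")
    print(f"penalizer={pen}:", dict(cph.params_.round(3)))
```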
