Similar literature
20 matching records found.
1.
Regression models for discrete responses have found numerous applications. We consider logit, probit and cumulative logit models for qualitative data, and the loglinear and linear Poisson models for count data. Statistical analysis of these models relies heavily on asymptotic likelihood theory, i.e. asymptotic properties of the maximum likelihood estimator and the likelihood ratio as well as related test statistics. In practical situations, previously published conditions assuring these properties may be too strong, or it is difficult to see whether they apply. This paper contributes to a clarification of this point and characterizes to some extent the situations where asymptotic theory is applicable and where it is not. In particular, sharp upper bounds on the admissible growth of regressors are given.
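As an illustration of the maximum likelihood machinery whose asymptotics the paper studies, here is a minimal sketch of Newton-Raphson (IRLS) fitting of a logit model on simulated data; the function name and data setup are ours, not from the paper.

```python
import numpy as np

def logit_mle(X, y, iters=25):
    """Newton-Raphson (IRLS) for the logit model; a minimal sketch."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        W = p * (1.0 - p)                     # IRLS weights
        grad = X.T @ (y - p)                  # score vector
        hess = X.T @ (W[:, None] * X)         # Fisher information
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = logit_mle(X, y)   # should be close to true_beta for large n
```

With well-behaved regressors the estimator converges to the truth at the usual root-n rate; the paper's point is characterizing how fast the regressors may grow before this breaks down.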

2.
3.
4.
The Multiple Comparison Procedures with Modeling Techniques (MCP-Mod) framework has recently been approved by the U.S. Food and Drug Administration and the European Medicines Agency as fit-for-purpose for phase II studies. Nonetheless, this approach relies on the asymptotic properties of maximum likelihood (ML) estimators, which might not be reasonable for small sample sizes. In this paper, we derive improved ML estimators and corrections for their covariance matrices in the censored Weibull regression model, based on the corrective and preventive approaches. We performed two simulation studies to evaluate the ML and improved ML estimators with their covariance matrices in (i) a regression framework and (ii) the MCP-Mod framework. We show that the improved ML estimators are less biased than the ML estimators, yielding Wald-type statistics that control the type I error without loss of power in both frameworks. Therefore, we recommend the use of improved ML estimators in the MCP-Mod approach to control the type I error at its nominal value for sample sizes ranging from 5 to 25 subjects per dose.

5.
Real-world problems are embedded with uncertainty; to tackle them, one must account for their probabilistic nature in both modeling and solution. In this work, classical functional-analytic results on the convergence of solutions of variational inequalities are extended to a stochastic setting for random Mann-type and Ishikawa-type iterative schemes in a Banach space. A mean square convergence result is proved for this extension.
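A deterministic Mann-type iteration (the non-random special case of the scheme studied here) can be sketched as follows; the operator and step-size sequence are illustrative choices of ours.

```python
import math

def mann_iterate(T, x0, steps):
    """Mann iteration x_{n+1} = (1 - a_n) x_n + a_n T(x_n) with a_n = 1/(n + 1);
    the paper studies the version where T is a random operator."""
    x = x0
    for n in range(1, steps + 1):
        a = 1.0 / (n + 1)
        x = (1.0 - a) * x + a * T(x)
    return x

# cos has a unique fixed point near 0.739; the iterates converge to it.
fixed_point = mann_iterate(math.cos, x0=1.0, steps=5000)
```

The diminishing step sizes satisfy the usual divergent-sum condition, which is what the stochastic extension also requires for mean square convergence.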

6.
Summary.  Every year since 1928, the Academy of Motion Picture Arts and Sciences has recognized outstanding achievement in film with its prestigious Academy Award, or Oscar. Before the winners in the various categories are announced, there is intense media and public interest in predicting who will come away from the awards ceremony with an Oscar statuette. There is no end of theories about which nominees are most likely to win, yet despite this there continue to be major surprises when the winners are announced. The paper frames the question of predicting the four major awards—picture, director, actor in a leading role and actress in a leading role—as a discrete choice problem. It is then possible to predict the winners in these four categories with a reasonable degree of success. The analysis also reveals which past results might be considered truly surprising—nominees with a low estimated probability of winning who have overcome nominees who were strongly favoured to win.

7.
For certain mixture models, improper priors are undesirable because they yield improper posteriors. However, proper priors may be undesirable because they require subjective input. We propose the use of specially chosen data-dependent priors. We show that, in some cases, data-dependent priors are the only priors that produce intervals with second-order correct frequentist coverage. The resulting posterior also has another interpretation: it is the product of a fixed prior and a pseudolikelihood.

8.
In this paper we consider discrete middle censoring, where the lifetime, the lower bound of the censoring interval and the interval's length are all geometrically distributed. We obtain the likelihood function of the observed data and derive the MLE of the unknown parameter using the EM algorithm. We also obtain the Bayes estimator of the unknown parameter under the squared error loss (SEL) function, and a credible interval for the parameter, using Monte Carlo methods.

9.
Due to rapid data growth, statistical analysis of massive datasets often has to be carried out in a distributed fashion, either because several datasets stored in separate physical locations are all relevant to a given problem, or simply to achieve faster (parallel) computation through a divide-and-conquer scheme. In both cases, the challenge is to obtain valid inference that does not require processing all data at a single central computing node. We show that for a very widely used class of spatial low-rank models, which can be written as a linear combination of spatial basis functions plus a fine-scale-variation component, parallel spatial inference and prediction for massive distributed data can be carried out exactly, meaning that the results are the same as for a traditional, non-distributed analysis. The communication cost of our distributed algorithms does not depend on the number of data points. After extending our results to the spatio-temporal case, we illustrate our methodology by carrying out distributed spatio-temporal particle filtering inference on total precipitable water measured by three different satellite sensor systems.
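The "communication cost independent of the number of data points" idea can be sketched for least squares on basis-function regressors: each node ships only a p × p cross-product matrix and a p-vector, and the combined solve equals the pooled one exactly. This toy setup is ours and omits the fine-scale-variation component of the paper's low-rank models.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0, 0.5])

# Three "nodes", each holding its own chunk of a basis-function regression.
nodes = []
for _ in range(3):
    X = rng.normal(size=(1000, 3))                 # local basis-function matrix
    y = X @ beta_true + 0.1 * rng.normal(size=1000)
    nodes.append((X, y))

# Each node communicates only a p x p matrix and a p-vector, so the
# communication cost does not depend on the number of data points.
XtX = sum(X.T @ X for X, _ in nodes)
Xty = sum(X.T @ y for X, y in nodes)
beta_dist = np.linalg.solve(XtX, Xty)

# Identical to a non-distributed solve on the pooled data.
X_all = np.vstack([X for X, _ in nodes])
y_all = np.concatenate([y for _, y in nodes])
beta_pooled = np.linalg.solve(X_all.T @ X_all, X_all.T @ y_all)
```

The exactness claim is the point: the distributed result is not an approximation of the pooled analysis but agrees with it to machine precision.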

10.
Summary. Variational methods have been proposed for obtaining deterministic lower bounds for log-likelihoods within missing data problems, but with little formal justification or investigation of the worth of the lower bound surfaces as tools for inference. We provide, within a general Markovian context, sufficient conditions under which estimators from the variational approximations are asymptotically equivalent to maximum likelihood estimators, and we show empirically, for the simple example of a first-order autoregressive model with missing values, that the lower bound surface can be very similar in shape to the true log-likelihood in non-asymptotic situations.

11.
W. Eschenbach, Statistics, 2013, 47(3): 451–462
The paper briefly describes methods and results in the statistical analysis of queueing systems.

12.
The Heston-STAR model is a new class of stochastic volatility models defined by generalizing the Heston model to allow the volatility of the volatility process, as well as the correlation between asset log-returns and variance shocks, to change across regimes via smooth transition autoregressive (STAR) functions. The form of the STAR functions is very flexible, much more so than the functions introduced in Jones (J Econom 116:181–224, 2003), and provides the framework for a wide range of stochastic volatility models. A Bayesian approach using data augmentation techniques is used to estimate the parameters of our model. We also explore the goodness of fit of our Heston-STAR model. Our analysis of the S&P 500 and the VIX index demonstrates that the Heston-STAR model is better able to deal with large market fluctuations (such as those in 2008) than the standard Heston model.
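A logistic smooth-transition (STAR) function, the ingredient that lets a parameter such as the vol-of-vol move smoothly between regimes, can be sketched as follows; the parameter values are illustrative choices of ours.

```python
import math

def star_transition(s, gamma, c):
    """Logistic smooth-transition function G(s; gamma, c) in (0, 1):
    gamma controls the transition speed, c the transition location."""
    return 1.0 / (1.0 + math.exp(-gamma * (s - c)))

def regime_param(s, low, high, gamma=5.0, c=0.0):
    """A regime-dependent parameter (e.g. vol-of-vol) interpolating
    smoothly between a 'low' and a 'high' regime value."""
    g = star_transition(s, gamma, c)
    return (1.0 - g) * low + g * high
```

As gamma grows the transition approaches a hard threshold switch; small gamma gives a near-linear blend, which is the flexibility the abstract refers to.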

13.
This article describes a full Bayesian treatment for simultaneous fixed-effect selection and parameter estimation in high-dimensional generalized linear mixed models. The approach consists of using a Bayesian adaptive Lasso penalty for signal-level adaptive shrinkage and a fast variational Bayes scheme for estimating the posterior mode of the coefficients. The proposed approach offers several advantages over existing methods; for example, the adaptive shrinkage parameters are incorporated automatically, and no Laplace approximation step is required to integrate out the random effects. The performance of our approach is illustrated on several simulated and real data examples. The algorithm is implemented in the R package glmmvb and is made available online.

14.
In this paper we derive locally optimal designs for discrete choice experiments. As in Kanninen (2002), we consider a multinomial logistic model that contains various qualitative attributes as well as a quantitative attribute, which may range over a sufficiently large interval. The derived optimal designs improve upon those given in the literature, but have the feature that every choice set contains alternatives that coincide in all but the quantitative attribute. The multinomial logistic model then leads to response behavior that is apparently unrealistic.

15.
In recent years, dynamical modelling has been provided with a range of breakthrough methods for exact Bayesian inference. However, it is often computationally infeasible to apply exact statistical methodologies to large data sets and complex models. This paper considers a nonlinear stochastic differential equation model observed with correlated measurement errors, with an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for the model parameters within reasonable time constraints. The ABC algorithm uses simulations of 'subsamples' from the assumed data-generating model, as well as a so-called 'early-rejection' strategy, to speed up computations in the ABC-MCMC sampler. Using a moderate number of subsamples does not seem to degrade the quality of the inferential results for the considered applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein dataset. The suggested methodology is fairly general and not limited to the exemplified model and data.
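A toy sketch of ABC-MCMC with an early-rejection step, in which the prior part of the Metropolis-Hastings ratio is checked before running the (expensive) forward simulation; the Gaussian forward model, flat prior, and tolerance are our illustrative choices, not the paper's SDE model.

```python
import random
import statistics

def simulate(theta, rng, n=50):
    """Forward model: n Gaussian observations with mean theta
    (a cheap stand-in for an expensive SDE simulation)."""
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def abc_mcmc(y_obs, steps=4000, tol=0.3, seed=0):
    """ABC-MCMC with early rejection: proposals outside the prior support
    are rejected before any forward simulation is run."""
    rng = random.Random(seed)
    s_obs = statistics.fmean(y_obs)          # observed summary statistic
    theta, draws = 0.0, []
    for _ in range(steps):
        prop = theta + rng.gauss(0.0, 1.0)
        # With a flat prior on (-10, 10) and a symmetric proposal, the
        # prior/proposal part of the MH ratio is 1 inside the support, 0 outside.
        if not (-10.0 < prop < 10.0):
            draws.append(theta)              # early rejection: no simulation
            continue
        s_sim = statistics.fmean(simulate(prop, rng))
        if abs(s_sim - s_obs) < tol:         # ABC accept if summaries are close
            theta = prop
        draws.append(theta)
    return draws

y_obs = simulate(1.5, random.Random(42))
draws = abc_mcmc(y_obs)
posterior_mean = statistics.fmean(draws[len(draws) // 2:])
```

In the paper the early-rejection check pays off because each forward simulation of the SDE is costly; here it only skips a cheap Gaussian draw, but the control flow is the same.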

16.
This paper is concerned with Bayesian inference in psychometric modeling. It treats conditional likelihood functions obtained from discrete conditional probability distributions that are generalizations of the hypergeometric distribution. The influence of nuisance parameters is eliminated by conditioning on the observed values of their sufficient statistics, and Bayesian considerations refer only to the parameters of interest. Since such a combination of techniques for dealing with the two types of parameters is less common in psychometrics, a wider scope in future research may be gained. The focus is on evaluating the empirical appropriateness of the assumptions of the Rasch model, thereby pointing to an alternative to the frequentist approach that dominates in this context. A number of examples are discussed: some are very straightforward to apply, while others are computationally intensive and may be impractical. The suggested procedure is illustrated using real data from a study on vocational education.

17.
This paper presents a method for Bayesian inference on the regression parameters in a linear model with independent and identically distributed errors that does not require the specification of a parametric family of densities for the error distribution. The method first selects a nonparametric kernel density estimate of the error distribution that is unimodal and based on the least-squares residuals. Once the error distribution is selected, the Metropolis algorithm is used to obtain the marginal posterior distribution of the regression parameters. The methodology is illustrated with data sets, and its performance relative to standard Bayesian techniques is evaluated using simulation results.
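A plausible sketch of the two-step procedure under our own illustrative setup (heavy-tailed t errors, rule-of-thumb bandwidth, flat prior, no unimodality adjustment): build a kernel density estimate from the least-squares residuals, then run random-walk Metropolis on the coefficients with that fixed density as the error likelihood.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(df=4, size=n)     # heavy-tailed errors

# Step 1: Gaussian-kernel density estimate of the error distribution,
# built once from the least-squares residuals and then held fixed.
beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
res = y - X @ beta_ls
h = 1.06 * res.std() * n ** (-0.2)                   # rule-of-thumb bandwidth

def log_error_density(e):
    z = (e[:, None] - res[None, :]) / h
    dens = np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300)                     # guard against log(0)

# Step 2: random-walk Metropolis on the regression coefficients,
# with the KDE as the error likelihood and a flat prior.
beta = beta_ls.copy()
ll = log_error_density(y - X @ beta).sum()
samples = []
for _ in range(2000):
    prop = beta + 0.05 * rng.normal(size=2)
    ll_prop = log_error_density(y - X @ prop).sum()
    if np.log(rng.uniform()) < ll_prop - ll:         # Metropolis accept step
        beta, ll = prop, ll_prop
    samples.append(beta)
post_mean = np.mean(samples[500:], axis=0)
```

Because the error density is fixed after step 1, the posterior is a pseudo-posterior; the paper's version additionally restricts the density estimate to be unimodal, which this sketch omits.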

18.
In this paper, we present a Bayesian analysis of double seasonal autoregressive moving average models. We first consider the problem of estimating the unknown lagged errors in the moving average part using a nonlinear least squares method; then, using natural conjugate and Jeffreys' priors, we approximate the marginal posterior distributions of the model coefficients and precision by multivariate t and gamma distributions, respectively. We evaluate the proposed Bayesian methodology in a simulation study and apply it to real-world hourly electricity load data sets.

19.
In this article, we propose a new technique for constructing confidence intervals for the mean of a noisy sequence with multiple change-points. We use the weighted bootstrap to generalize the bootstrap aggregating, or bagging, estimator. A standard deviation formula for the bagging estimator is introduced, based on which smoothed confidence intervals are constructed. To further improve the performance of the smoothed interval for weak signals, we suggest a strategy of adaptively choosing between the percentile intervals and the smoothed intervals. A new intensity plot is proposed to visualize the pattern of the change-points. We also propose a new change-point estimator based on the intensity plot, which has superior performance in comparison with state-of-the-art segmentation methods. The finite-sample performance of the confidence intervals and the change-point estimator is evaluated through Monte Carlo studies and illustrated with a real data example.
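The bagging idea can be sketched with a weighted (Bayesian) bootstrap, which perturbs observation weights rather than resampling so the sequence order is preserved; the single-change-point fit and the plain bootstrap interval below are our simplifications of the paper's smoothed interval and its dedicated standard deviation formula.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100
signal = np.where(np.arange(n) < 60, 0.0, 2.0)     # one change-point at index 60
y = signal + 0.5 * rng.normal(size=n)

def cp_mean_at(y, w, t):
    """Weighted single-change-point fit; returns the fitted mean at index t."""
    best, best_rss = None, np.inf
    for k in range(5, len(y) - 5):                 # candidate change-points
        m1 = np.average(y[:k], weights=w[:k])
        m2 = np.average(y[k:], weights=w[k:])
        rss = w[:k] @ (y[:k] - m1) ** 2 + w[k:] @ (y[k:] - m2) ** 2
        if rss < best_rss:
            best_rss, best = rss, (k, m1, m2)
    k, m1, m2 = best
    return m1 if t < k else m2

# Bagging: average the estimator over random exponential weights (a weighted,
# order-preserving bootstrap), then use the replicate spread for an interval.
B = 200
reps = np.array([cp_mean_at(y, rng.exponential(size=n), t=80) for _ in range(B)])
bagged = reps.mean()                               # bagged estimate of the mean at t = 80
se = reps.std(ddof=1)
interval = (bagged - 1.96 * se, bagged + 1.96 * se)
```

Averaging over weight draws smooths the discontinuous change-point fit, which is what makes a normal-style interval around the bagged estimate defensible for weak signals.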

20.
Consider the nonparametric heteroscedastic regression model Y = m(X) + σ(X)ε, where m(·) is an unknown conditional mean function and σ(·) is an unknown conditional scale function. In this paper, the limit distribution of the quantile estimate of the scale function σ(X) is derived. Since the limit distribution depends on the unknown density of the errors, an empirical likelihood ratio statistic based on the quantile estimator is proposed. This statistic is used to construct confidence intervals for the variance function. Under certain regularity conditions, it is shown that the quantile estimate of the scale function converges to a Brownian motion and that the empirical likelihood ratio statistic converges to a chi-squared random variable. Simulation results demonstrate the superiority of the proposed method over the least squares procedure when the underlying errors have heavy tails.
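A crude quantile-based scale estimate, the local median of absolute residuals rescaled by Φ⁻¹(0.75), can be sketched as follows; the bandwidth, windowing, and Gaussian-error rescaling constant are our illustrative choices, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 4000
x = rng.uniform(0.0, 1.0, size=n)
sigma = 0.5 + x                                   # true scale function sigma(x)
y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

def local_scale(x0, bandwidth=0.05):
    """Median of |local residuals| rescaled by Phi^{-1}(0.75) ~= 0.6745,
    which makes it consistent for sigma(x0) under Gaussian errors."""
    mask = np.abs(x - x0) < bandwidth
    resid = y[mask] - y[mask].mean()              # crude local mean removal
    return np.median(np.abs(resid)) / 0.6745

sigma_hat = local_scale(0.5)                      # true value: sigma(0.5) = 1.0
```

Unlike a least-squares (second-moment) scale estimate, the median of absolute residuals is insensitive to heavy tails, which is the robustness property the simulations in the paper exploit.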
