Similar Articles
1.
The hazard function describes the instantaneous rate of failure at a time t, given that the individual survives up to t. In applications, the effects of covariates produce changes in the hazard function. In survival analysis it is of interest to identify whether and where a change-point in time has occurred. In this work, covariates and censored observations are taken into account in order to estimate a change-point in the Weibull regression hazard model, which is a generalization of the exponential model. For this more general model, it is possible to obtain maximum likelihood estimators for the change-point and for the parameters involved. A Monte Carlo simulation study shows that the model can indeed be implemented in practice. An application to clinical trial data from a treatment of chronic granulomatous disease is also included.
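A hedged sketch of how such an estimator could be implemented, assuming one plausible parameterization (a Weibull hazard whose rate parameter switches at the change-point τ); this is an illustration of the profile-likelihood idea, not necessarily the paper's exact model. Here `t` are positive observed times, `delta` the censoring indicators (1 = event, 0 = censored), and `taus` a grid of candidate change-points (e.g. quantiles of the observed event times):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, t, delta, tau):
    """Negative log-likelihood of right-censored data under a Weibull hazard
    h(t) = lam * alpha * t**(alpha - 1), with rate lam switching from lam1 to lam2 at tau."""
    alpha, lam1, lam2 = np.exp(params)      # optimize on the log scale to keep parameters positive
    lam = np.where(t <= tau, lam1, lam2)
    log_h = np.log(lam) + np.log(alpha) + (alpha - 1.0) * np.log(t)
    # cumulative hazard: lam1 * t^alpha before tau, continued with rate lam2 afterwards
    H = np.where(t <= tau,
                 lam1 * t**alpha,
                 lam1 * tau**alpha + lam2 * (t**alpha - tau**alpha))
    return -np.sum(delta * log_h - H)       # delta = 1 for events, 0 for censored

def fit_changepoint(t, delta, taus):
    """Profile the likelihood over a grid of candidate change-points."""
    best = None
    for tau in taus:
        res = minimize(neg_loglik, x0=np.zeros(3), args=(t, delta, tau), method="Nelder-Mead")
        if best is None or res.fun < best[0]:
            best = (res.fun, tau, dict(zip(("alpha", "lam1", "lam2"), np.exp(res.x))))
    return best[1], best[2]                 # estimated change-point and Weibull parameters
```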

2.
We propose a semiparametric approach to estimate the existence and location of a statistical change-point in a nonlinear multivariate time series contaminated with an additive noise component. In particular, we consider a p-dimensional stochastic process of independent multivariate normal observations whose mean function varies smoothly except at a single change-point. Our approach involves conducting a Bayesian analysis on the empirical detail coefficients of the original time series after a wavelet transform. If the mean function of our time series can be expressed as a multivariate step function, we find our Bayesian-wavelet method performs comparably with classical parametric methods such as maximum likelihood estimation. The advantage of our multivariate change-point method is that it applies to a much larger class of mean functions, requiring only general smoothness conditions.
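As a rough, non-Bayesian illustration of why wavelet detail coefficients localize a mean shift (a one-dimensional Haar sketch with simulated data, not the paper's multivariate Bayesian procedure; `pywt` is the PyWavelets package):

```python
import numpy as np
import pywt  # PyWavelets

# Toy one-dimensional series whose mean jumps at an (unknown) index.
rng = np.random.default_rng(1)
n, true_cp = 256, 160
signal = np.where(np.arange(n) < true_cp, 0.0, 1.5) + rng.normal(0, 0.5, n)

# Level-1 Haar detail coefficients: a mean jump produces one coefficient of
# unusually large magnitude near the jump, while smooth variation stays small.
_, detail = pywt.dwt(signal, 'haar')
cp_estimate = 2 * int(np.argmax(np.abs(detail)))   # map the coefficient index back to time
print(cp_estimate)
```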

3.
A Bayesian analysis is provided for the Wilcoxon signed-rank statistic (T+). The Bayesian analysis is based on a sign-bias parameter φ on the (0, 1) interval. For the case of a uniform prior probability distribution for φ and for small sample sizes (i.e., 6 ≤ n ≤ 25), values for the statistic T+ are computed that enable probabilistic statements about φ. For larger sample sizes, approximations are provided for the asymptotic likelihood function P(T+|φ) as well as for the posterior distribution P(φ|T+). Power analyses are examined both for properly specified Gaussian sampling and for misspecified non-Gaussian models. The new Bayesian metric has high power efficiency in the range of 0.9–1 relative to a standard t test when there is Gaussian sampling. But if the sampling is from an unknown and misspecified distribution, then the new statistic still has high power; in some cases, the power can be higher than that of the t test (especially for probability mixtures and heavy-tailed distributions). The new Bayesian analysis is thus a useful and robust method for applications where the usual parametric assumptions are questionable. These properties further enable a way to do a generic Bayesian analysis for many non-Gaussian distributions that currently lack a formal Bayesian model.

4.
A regression model with a possible structural change and with a small number of measurements is considered. A priori information about the shape of the regression function is used to formulate the model as a linear regression model with inequality constraints, and a likelihood ratio test for the presence of a change-point is constructed. The exact null distribution of the test statistic is given. Consistency of the test is proved when the noise level goes to zero. Numerical approximations to the powers against various alternatives are given and compared with the powers of the k-linear-r-ahead recursive residuals tests and CUSUM tests. Performance of four different estimators of the change-point is studied in a Monte Carlo experiment. An application of the procedures to real data is also presented.

5.
We consider a random regression model with several-fold change-points. The results for one change-point are generalized. The maximum likelihood estimator of the parameters is shown to be consistent, and the asymptotic distribution for the estimators of the coefficients is shown to be Gaussian. The estimators of the change-points converge, at rate n⁻¹, to the vector whose components are the left end points of the maximizing interval with respect to each change-point. The likelihood process is asymptotically equivalent to the sum of independent compound Poisson processes.

6.
Lin, Tsung I., Lee, Jack C., and Ni, Huey F. Statistics and Computing, 2004, 14(2): 119–130.
A finite mixture model using the multivariate t distribution has been shown to be a robust extension of normal mixtures. In this paper, we present a Bayesian approach for inference about the parameters of t-mixture models. The prior specifications are weakly informative to avoid nonintegrable posterior distributions. We present two efficient EM-type algorithms for computing the joint posterior mode with the observed data and an incomplete future vector as the sample. Markov chain Monte Carlo sampling schemes are also developed to obtain the target posterior distribution of the parameters. The advantages of the Bayesian approach over the maximum likelihood method are demonstrated via a set of real data.
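For orientation, a minimal maximum-likelihood EM for a univariate two-component Student-t mixture with fixed degrees of freedom; the paper works with multivariate t components, Bayesian priors, posterior modes and MCMC, so this sketch only illustrates the latent precision weights that make t mixtures robust to outliers:

```python
import numpy as np
from scipy.stats import t as student_t

def em_t_mixture(x, K=2, nu=4.0, n_iter=200, seed=0):
    """EM for a univariate K-component Student-t mixture with fixed degrees of freedom nu."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(x, K, replace=False)
    sigma = np.full(K, np.std(x))
    for _ in range(n_iter):
        # E-step: responsibilities r and latent precision weights u (small for outlying points)
        dens = np.stack([pi[k] * student_t.pdf(x, df=nu, loc=mu[k], scale=sigma[k])
                         for k in range(K)], axis=1)              # shape (n, K)
        r = dens / dens.sum(axis=1, keepdims=True)
        d2 = np.stack([((x - mu[k]) / sigma[k]) ** 2 for k in range(K)], axis=1)
        u = (nu + 1.0) / (nu + d2)
        # M-step: weighted updates of mixing proportions, locations, and scales
        pi = r.mean(axis=0)
        for k in range(K):
            w = r[:, k] * u[:, k]
            mu[k] = np.sum(w * x) / np.sum(w)
            sigma[k] = np.sqrt(np.sum(w * (x - mu[k]) ** 2) / np.sum(r[:, k]))
    return pi, mu, sigma
```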

7.
In this article, Bayesian inference for the half-normal and half-t distributions using uninformative priors is considered. It is shown that exact Bayesian inference can be undertaken for the half-normal distribution without the need for Gibbs sampling. Simulation is then used to compare the sampling properties of Bayesian point and interval estimators with those of their maximum likelihood-based counterparts. Inference for the half-t distribution based on the use of Gibbs sampling is outlined, and an approach to model comparison based on the use of Bayes factors is discussed. The fitting of the half-normal and half-t models is illustrated using real data on the body fat measurements of elite athletes.
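A small sketch of the kind of exact computation that is possible for the half-normal scale, assuming the improper prior p(σ²) ∝ 1/σ² (one common uninformative choice; the paper's priors may differ), under which the posterior of σ² given data y is Inverse-Gamma(n/2, Σy²/2):

```python
import numpy as np
from scipy.stats import invgamma

def halfnormal_sigma_posterior(y):
    """Exact posterior summaries for sigma in a half-normal model, assuming the improper
    prior p(sigma^2) ~ 1/sigma^2, so that sigma^2 | y ~ Inverse-Gamma(n/2, sum(y^2)/2)."""
    y = np.asarray(y, dtype=float)
    n, S = y.size, np.sum(y**2)
    post = invgamma(a=n / 2, scale=S / 2)
    sigma_median = np.sqrt(post.median())            # posterior median of sigma
    sigma_ci = np.sqrt(post.ppf([0.025, 0.975]))     # 95% equal-tailed credible interval
    return sigma_median, sigma_ci
```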

8.
Two-phase regression models with inequality constraints on the regression coefficients and with a small number of measurements are considered. A new test for the presence of a change-point, based on the likelihood ratio in a linear model with inequality constraints, is proposed. Numerical approximations to the powers against various alternatives are given and compared with the powers of the likelihood ratio test in two-phase regression models without inequality constraints, the backwards CUSUM test, and the k-linear-r-ahead recursive residuals tests. Performance of related likelihood-based estimators of the change-point is briefly studied in a Monte Carlo experiment.

9.
As in many studies, the data collected are limited and an exact value is recorded only if it falls within an interval range; hence, the responses can be left, interval, or right censored. Linear (and nonlinear) regression models are routinely used to analyze these types of data and are based on normality assumptions for the error terms. However, those analyses might not provide robust inference when the normality assumptions are questionable. In this article, we develop a Bayesian framework for censored linear regression models by replacing the Gaussian assumption for the random errors with scale mixtures of normal (SMN) distributions. The SMN is an attractive class of symmetric heavy-tailed densities that includes the normal, Student-t, Pearson type VII, slash, and contaminated normal distributions as special cases. Using a Bayesian paradigm, an efficient Markov chain Monte Carlo algorithm is introduced to carry out posterior inference. A new hierarchical prior distribution is suggested for the degrees of freedom parameter in the Student-t distribution. The likelihood function is used not only to compute some Bayesian model selection measures but also to develop Bayesian case-deletion influence diagnostics based on the q-divergence measure. The proposed Bayesian methods are implemented in the R package BayesCR. The newly developed procedures are illustrated with applications using real and simulated data.

10.
The problem of predicting the number of change points in a piecewise linear model is studied from a Bayesian viewpoint. For a given a priori joint probability function f_{R,C} = f_R f_{C|R}, where R is the number of change points and C = C(R) = (C_1, …, C_R) is the change-point epoch vector, the marginal posterior probability function f_{R,C|Y} is obtained and then used to find predictors for R and C(R).

11.
The usual derivation of BIC for the marginal likelihood of a model or hypothesis via Laplace approximation does not hold for a change-point, which is a discrete parameter. We provide an analogue, lBIC, which is a lower bound to the marginal likelihood of a model with change points and has an approximation error of order O_p(1), like the standard Schwarz BIC. Several applications are provided, covering simulated random variables and real financial data on short-term interest rates.
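For reference, the Laplace-approximation argument behind the standard Schwarz BIC requires a continuous d-dimensional parameter with an interior maximum, which is exactly what breaks down for a discrete change-point:

```latex
\log m(Y) \;=\; \log\!\int L(\theta)\,\pi(\theta)\,d\theta
\;\approx\; \log L(\hat\theta) \;-\; \frac{d}{2}\,\log n \;+\; O_p(1).
```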

12.
The Student’s t distribution has become increasingly prominent and is considered a competitor to the normal distribution. Motivated by real examples in physics, decision sciences, and Bayesian statistics, a new t distribution is introduced by taking the product of two Student’s t pdfs. Various structural properties of this distribution are derived, including its cdf, moments, mean deviation about the mean, mean deviation about the median, entropy, asymptotic distribution of the extreme order statistics, maximum likelihood estimates, and the Fisher information matrix. Finally, an application to a Bayesian testing problem is illustrated.

13.
The well-known Wilson and Agresti–Coull confidence intervals for a binomial proportion p are centered around a Bayesian estimator. Using this as a starting point, similarities between frequentist confidence intervals for proportions and Bayesian credible intervals based on low-informative priors are studied using asymptotic expansions. A Bayesian motivation for a large class of frequentist confidence intervals is provided. It is shown that the likelihood ratio interval for p approximates a Bayesian credible interval based on Kerman’s neutral noninformative conjugate prior up to O(n⁻¹) in the confidence bounds. For significance levels α ≤ 0.317, the Bayesian interval based on Jeffreys' prior is then shown to be a compromise between the likelihood ratio and Wilson intervals. Supplementary materials for this article are available online.
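A small numerical sketch comparing two of the intervals discussed here, using their standard closed forms (the Jeffreys credible interval from the Beta(x + 1/2, n − x + 1/2) posterior and the Wilson score interval); this is illustrative code, not the paper's asymptotic expansions:

```python
import numpy as np
from scipy.stats import beta, norm

def jeffreys_interval(x, n, alpha=0.05):
    """Equal-tailed credible interval for p under the Jeffreys prior Beta(1/2, 1/2)."""
    return (beta.ppf(alpha / 2, x + 0.5, n - x + 0.5),
            beta.ppf(1 - alpha / 2, x + 0.5, n - x + 0.5))

def wilson_interval(x, n, alpha=0.05):
    """Wilson score interval, centered at a shrinkage (Bayes-like) point estimate."""
    z = norm.ppf(1 - alpha / 2)
    p_hat = x / n
    center = (p_hat + z**2 / (2 * n)) / (1 + z**2 / n)
    half = (z / (1 + z**2 / n)) * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

print(jeffreys_interval(7, 20), wilson_interval(7, 20))
```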

14.
Three test statistics for a change-point in a linear model, variants of those considered by Andrews and Ploberger [Optimal tests when a nuisance parameter is present only under the alternative. Econometrica. 1994;62:1383–1414], are studied: the sup-likelihood ratio (LR) statistic, a weighted average of exponentiated LR statistics, and a weighted average of LR statistics. Critical values for the statistics with time trend regressors, obtained via simulation, are found to vary considerably, depending on conditions on the error terms. The performance of the bootstrap in approximating p-values of the distributions is assessed in a simulation study. A sample approximation to asymptotic analytical expressions extending those of Kim and Siegmund [The likelihood ratio test for a change-point in simple linear regression. Biometrika. 1989;76:409–423] in the case of the sup-LR test is also assessed. The approximations and the bootstrap are applied to the Quandt data [The estimation of a parameter of a linear regression system obeying two separate regimes. J Amer Statist Assoc. 1958;53:873–880] and to real data concerning a change-point in oxygen uptake during incremental exercise testing; the bootstrap gives reasonable results.
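A hedged sketch of a sup-LR-type change-point statistic with a residual-bootstrap p-value, assuming Gaussian errors with common variance, a simple intercept-plus-regressor design, and 15% trimming of candidate split points (illustrative choices, not the paper's exact setup):

```python
import numpy as np

def sup_lr(y, x, trim=0.15):
    """Sup-LR-type statistic for one change-point in y = a + b*x + error.
    Under Gaussian errors with common variance, 2*(logLik_split - logLik_pooled)
    equals n * log(RSS_pooled / RSS_split)."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    rss0 = np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2)
    stats = []
    for k in range(int(trim * n), int((1 - trim) * n)):   # trimmed candidate split points
        b1 = np.linalg.lstsq(X[:k], y[:k], rcond=None)[0]
        b2 = np.linalg.lstsq(X[k:], y[k:], rcond=None)[0]
        rss_split = np.sum((y[:k] - X[:k] @ b1) ** 2) + np.sum((y[k:] - X[k:] @ b2) ** 2)
        stats.append(n * np.log(rss0 / rss_split))
    return max(stats)

def bootstrap_pvalue(y, x, B=499, seed=0):
    """Residual bootstrap under the no-change null: refit the pooled model, resample residuals."""
    rng = np.random.default_rng(seed)
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta0
    observed = sup_lr(y, x)
    boot = [sup_lr(X @ beta0 + rng.choice(resid, size=n), x) for _ in range(B)]
    return np.mean(np.array(boot) >= observed)
```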

15.
In most software reliability models that utilize the nonhomogeneous Poisson process (NHPP), the intensity function of the counting process is usually assumed to be continuous and monotone. However, for various practical reasons, change points may exist in the intensity function, and thus the assumption of a continuous and monotone intensity function may be unrealistic in many real situations. In this article, a Bayesian change-point approach using beta mixtures for modeling the intensity function with possible change points is proposed. A hidden Markov model with nonconstant transition probabilities is applied to the beta mixture to detect change points in the parameters. The estimation and interpretation of the model are illustrated using the Naval Tactical Data System (NTDS) data. The proposed change-point model is also compared with competing models via the marginal likelihood. It is seen that the proposed model has the highest marginal likelihood and outperforms the competing models.
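As a much-simplified illustration of a change point in an NHPP intensity (a single piecewise-constant rate estimated by profile likelihood over a grid of candidate change-points, not the paper's beta-mixture/hidden-Markov formulation):

```python
import numpy as np

def profile_loglik(times, T, tau):
    """Profile log-likelihood of an NHPP on [0, T] with piecewise-constant intensity
    switching at tau; given tau, the MLEs are lam1 = n1/tau and lam2 = n2/(T - tau)."""
    times = np.asarray(times, dtype=float)
    n1 = np.sum(times <= tau)
    n2 = times.size - n1
    ll = 0.0
    if n1 > 0:
        ll += n1 * np.log(n1 / tau)
    if n2 > 0:
        ll += n2 * np.log(n2 / (T - tau))
    return ll - n1 - n2

def estimate_changepoint(times, T, grid):
    """Grid search for the change-point maximizing the profile log-likelihood."""
    lls = [profile_loglik(times, T, tau) for tau in grid]
    return grid[int(np.argmax(lls))]
```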

16.
This paper presents a methodology for model fitting and inference in the context of Bayesian models of the type f(Y | X, θ) f(X | θ) f(θ), where Y is the (set of) observed data, θ is a set of model parameters, and X is an unobserved (latent) stationary stochastic process induced by the first-order transition model f(X(t+1) | X(t), θ), where X(t) denotes the state of the process at time (or generation) t. The crucial feature of this type of model is that, given θ, the transition model f(X(t+1) | X(t), θ) is known, but the distribution of the stochastic process in equilibrium, that is f(X | θ), is, except in very special cases, intractable and hence unknown. A further point to note is that the data Y are assumed to be observed when the underlying process is in equilibrium; in other words, the data are not collected dynamically over time. We refer to such a specification as a latent equilibrium process (LEP) model. It is motivated by problems in population genetics (though other applications are discussed), where it is of interest to learn about parameters such as mutation and migration rates and population sizes, given a sample of allele frequencies at one or more loci. In such problems it is natural to assume that the distribution of the observed allele frequencies depends on the true (unobserved) population allele frequencies, whereas the distribution of the true allele frequencies is only indirectly specified through a transition model. As a hierarchical specification, it is natural to fit the LEP within a Bayesian framework. Fitting such models is usually done via Markov chain Monte Carlo (MCMC). However, we demonstrate that, in the case of LEP models, implementation of MCMC is far from straightforward. The main contribution of this paper is to provide a methodology to implement MCMC for LEP models. We demonstrate our approach on population genetics problems with both simulated and real data sets. The resultant model fitting is computationally intensive, and thus we also discuss parallel implementation of the procedure in special cases.

17.
Consider an inhomogeneous Poisson process X on [0, T] whose unknown intensity function “switches” from a lower function g* to an upper function h* at some unknown point θ* that has to be identified. We consider two known continuous functions g and h such that g*(t) ≤ g(t) < h(t) ≤ h*(t) for 0 ≤ t ≤ T. We describe the large-sample behavior of the generalized likelihood ratio and Wald tests constructed on the basis of a misspecified model. The power functions are studied under local alternatives and compared numerically with the help of simulations. We also show the following robustness result: the Type I error rate is preserved even though a misspecified model is used to construct the tests.

18.
The two-parameter lognormal distribution with density function f(y; γ, σ²) = [(2πσ²)^(1/2) y]⁻¹ exp[−(ln y − γ)²/(2σ²)], y > 0, is important as a failure-time model in life testing. In this paper, Bayesian lower bounds for the reliability function R(t; γ, σ²) = Φ[(γ − ln t)/σ] are obtained for two cases. First, it is assumed that γ is known and σ² has either an inverted gamma or “general uniform” prior distribution. Then, for the case that both γ and σ² are unknown, the normal-gamma prior and Jeffreys' vague prior are considered. Some Monte Carlo simulations are given to indicate some of the properties of the Bayesian lower bounds.
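A Monte Carlo sketch of a Bayesian lower bound for R(t) in the unknown-(γ, σ²) case under Jeffreys' vague prior p(γ, σ²) ∝ 1/σ² (one of the priors considered); for log-transformed data the posterior takes the standard normal/inverse-gamma form:

```python
import numpy as np
from scipy.stats import invgamma, norm

def bayes_lower_bound_R(y, t, level=0.95, B=20000, seed=0):
    """Monte Carlo lower bound for R(t) = Phi((gamma - ln t)/sigma) in the lognormal model
    under Jeffreys' vague prior p(gamma, sigma^2) ~ 1/sigma^2.
    Posterior: sigma^2 ~ Inv-Gamma((n-1)/2, (n-1)s^2/2), gamma | sigma^2 ~ N(zbar, sigma^2/n)."""
    rng = np.random.default_rng(seed)
    z = np.log(np.asarray(y, dtype=float))           # log failure times
    n, zbar, s2 = z.size, z.mean(), z.var(ddof=1)
    sig2 = invgamma.rvs(a=(n - 1) / 2, scale=(n - 1) * s2 / 2, size=B, random_state=rng)
    gam = rng.normal(zbar, np.sqrt(sig2 / n))
    R = norm.cdf((gam - np.log(t)) / np.sqrt(sig2))
    return np.quantile(R, 1 - level)                 # e.g. the 5th percentile is a 95% lower bound
```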

19.
Profile monitoring is the use of control charts for cases in which the quality of a process or product can be characterized by a functional relationship between a response variable and one or more explanatory variables. Unlike the linear profile with its simple structure, the nonlinear profile has received relatively little attention because of its high complexity. Regression modeling is the usual starting point for Phase I analysis of nonlinear profiles, but it lacks sensitivity to local characteristic changes. This article presents a strategy comprising two major components: data segmentation, which localizes changes by overlaying grid points on the horizontal axis, and change-point detection via maximum likelihood estimation. A simulated data set from a polynomial profile is used to illustrate the effectiveness of the proposed strategy, which is compared with Williams' T² multivariate statistic.

20.
Using simultaneous Bayesian modeling, data on the size of lymphedema occurring in the arms of breast cancer patients after breast cancer surgery (the longitudinal outcome) and the time to disease progression (the time-to-event outcome) are analyzed jointly. A model based on a multivariate skew-t distribution is shown to provide the best fit, a finding also confirmed by simulation studies.
