Similar Literature
1.
ABSTRACT

In this paper we propose a class of skewed t link models for analyzing binary response data with covariates. It is a class of asymmetric link models designed to improve the overall fit when commonly used symmetric links, such as the logit and probit links, do not provide the best available fit for a given binary response dataset. We develop the class of models by introducing a skewed t distribution for the underlying latent variable. For the analysis of the models, Bayesian and non-Bayesian methods are pursued using a Markov chain Monte Carlo (MCMC) sampling-based approach. The necessary theory involved in modelling and computation is provided. Finally, a simulation study and a real data example illustrate the proposed methodology.
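As a concrete illustration of fitting a binary model through an asymmetric link, here is a minimal maximum-likelihood sketch in Python. It uses the Azzalini-Capitanio skew-t as the link kernel, which is one common "skewed t" construction and may differ from the paper's exact specification; the shape parameters nu and lam are held fixed for simplicity rather than estimated.

```python
import numpy as np
from scipy import stats, optimize, integrate

# Azzalini-Capitanio skew-t density: one common "skewed t" construction
# (illustrative; the paper's exact class may differ).
def skew_t_pdf(z, nu=5.0, lam=2.0):
    w = lam * z * np.sqrt((nu + 1.0) / (nu + z**2))
    return 2.0 * stats.t.pdf(z, nu) * stats.t.cdf(w, nu + 1.0)

# CDF by numerical integration; slow but adequate for a sketch.
def skew_t_cdf(z, nu=5.0, lam=2.0):
    val, _ = integrate.quad(skew_t_pdf, -np.inf, z, args=(nu, lam))
    return val

# Negative log-likelihood of the binary model P(y = 1 | x) = F(x'beta),
# where F is the skew-t CDF used as an asymmetric link.
def nll(beta, X, y):
    p = np.array([skew_t_cdf(e) for e in X @ beta])
    p = np.clip(p, 1e-10, 1.0 - 1e-10)
    return -np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
p_true = np.array([skew_t_cdf(e) for e in X @ np.array([0.5, 1.0])])
y = (rng.uniform(size=200) < p_true).astype(float)

fit = optimize.minimize(nll, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
print(fit.x)   # should be near (0.5, 1.0)
```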

2.
This article derives Bartlett corrections for improving the chi-square approximation to likelihood ratio statistics in a class of symmetric nonlinear regression models, a wide class that encompasses the t model and several other symmetric distributions with longer-than-normal tails. We present, in matrix notation, Bartlett corrections to likelihood ratio statistics in nonlinear regression models with errors that follow a symmetric distribution, generalizing the results of Ferrari, S. L. P. and Arellano-Valle, R. B. (1996), Modified likelihood ratio and score tests in linear regression models using the t distribution, Braz. J. Prob. Statist., 10, 15–33, who considered a t distribution for the errors, and of Ferrari, S. L. P. and Uribe-Opazo, M. A. (2001), Corrected likelihood ratio tests in a class of symmetric linear regression models, Braz. J. Prob. Statist., 15, 49–67, who considered a symmetric linear regression model. The formulae derived are simple enough to be used analytically to obtain Bartlett corrections in a variety of important models. We also present simulation results comparing the sizes and powers of the usual likelihood ratio tests and their Bartlett-corrected versions.
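For orientation, the generic shape of a Bartlett correction is sketched below; the article's contribution is an explicit matrix formula for the correction term B in its model class, which is not reproduced here.

```latex
% LR is the likelihood ratio statistic for q restrictions. Under the null,
% E(LR) = q + B + O(n^{-2}); rescaling LR so its mean matches that of the
% chi-square(q) reference distribution gives the Bartlett-corrected statistic.
\[
  \mathrm{LR}^{*} = \frac{\mathrm{LR}}{1 + B/q},
  \qquad
  \mathbb{E}(\mathrm{LR}) = q + B + O(n^{-2}),
\]
% so that E(LR*) = q + O(n^{-2}), improving the chi-square approximation.
```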

3.
Imperfect repair models are a class of stochastic models that deal with recurrent phenomena. This article focuses on the Block, Borges, and Savits (1985) age-dependent minimal repair model (the BBS model), in which a system that fails at time t undergoes one of two types of repair: with probability p(t) a perfect repair is performed, and with probability 1 − p(t) a minimal repair is performed. The goodness-of-fit problem of interest concerns the initial distribution of the failure ages. In particular, interest centers on testing the null hypothesis that the hazard rate function of the time to first event occurrence, λ(·), is equal to a prespecified hazard rate function λ0(·). This paper extends the class of hazard-based smooth goodness-of-fit tests introduced in Peña (1998a) to the case where data accrual is from a BBS model. The goodness-of-fit tests are score tests derived by reformulating Neyman's idea of smooth tests in terms of hazard functions. Omnibus as well as directional tests are developed, and simulation results are presented to illustrate the sensitivities of the proposed tests to certain types of alternatives.
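A short simulation sketch in Python may help fix ideas: it generates one sample path of a BBS-type process by thinning, under an assumed increasing hazard and a constant perfect-repair probability (both choices are illustrative, not the paper's).

```python
import numpy as np

# One sample path of a BBS-type process on (0, tau] via thinning.
# lam0: hazard of the time to first failure (as a function of system age);
# p:    age-dependent perfect-repair probability.
# A minimal repair leaves the failure intensity on its current trajectory
# (the age origin is unchanged); a perfect repair resets the age to zero.
def simulate_bbs(lam0, p, tau, lam_max, rng):
    events, origin, t = [], 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)      # candidate point
        if t > tau:
            return events
        age = t - origin
        if rng.uniform() < lam0(age) / lam_max:  # accept as a failure
            events.append(t)
            if rng.uniform() < p(age):           # perfect repair
                origin = t
            # else: minimal repair, origin unchanged

rng = np.random.default_rng(1)
lam0 = lambda u: 0.5 * u      # increasing hazard; bounded by 5 on [0, 10]
p = lambda u: 0.3             # constant perfect-repair probability
print(simulate_bbs(lam0, p, tau=10.0, lam_max=5.0, rng=rng))
```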

4.
ABSTRACT

In panel data models and other regressions with unobserved effects, fixed effects estimation is often paired with cluster-robust variance estimation (CRVE) to account for heteroscedasticity and unmodeled dependence among the errors. Although asymptotically consistent, CRVE can be biased downward when the number of clusters is small, leading to hypothesis tests with rejection rates that are too high. More accurate tests can be constructed using bias-reduced linearization (BRL), which corrects the CRVE based on a working model, in conjunction with a Satterthwaite approximation for t-tests. We propose a generalization of BRL that can be applied in models with arbitrary sets of fixed effects, where the original BRL method is undefined, and describe how to apply the method when the regression is estimated after absorbing the fixed effects. We also propose a small-sample test for multiple-parameter hypotheses, which generalizes the Satterthwaite approximation for t-tests. In simulations covering a wide range of scenarios, we find that the conventional cluster-robust Wald test can severely over-reject while the proposed small-sample test maintains Type I error close to nominal levels. The proposed methods are implemented in an R package called clubSandwich. This article has online supplementary materials.
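To make the starting point concrete, the sketch below computes the basic, uncorrected ("CR0") cluster-robust sandwich estimator for OLS in Python; the BRL adjustment and Satterthwaite degrees of freedom that the article develops are deliberately omitted, and the data-generating choices are arbitrary.

```python
import numpy as np

# Minimal sketch of the standard (CR0) cluster-robust variance estimator
# for OLS. The article's BRL correction rescales the residuals per cluster
# under a working model; that step is omitted here. The R package
# clubSandwich implements the full method.
def ols_crve(X, y, cluster):
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(cluster):
        Xg, eg = X[cluster == g], resid[cluster == g]
        score = Xg.T @ eg                     # cluster-level score
        meat += np.outer(score, score)
    V = XtX_inv @ meat @ XtX_inv              # sandwich: bread * meat * bread
    return beta, V

rng = np.random.default_rng(2)
cluster = np.repeat(np.arange(8), 25)         # few clusters: CR0 biased down
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=200) + rng.normal(size=8)[cluster]
beta, V = ols_crve(X, y, cluster)
print(beta, np.sqrt(np.diag(V)))              # coefficients and robust SEs
```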

5.
Abstract

In this paper the problem of finding exactly optimal sampling designs for estimating the weighted integral of a stochastic process with a product covariance structure (R(s, t) = u(s)v(t) for s < t) is discussed. The sampling designs for certain standard processes belonging to the product class are calculated. An asymptotic solution to the design problem also follows as a consequence.

6.
This article proposes a class of multivariate bilateral selection t distributions useful for analyzing non-normal (skewed and/or bimodal) multivariate data. The class is associated with a bilateral selection mechanism and is obtained from a marginal distribution of the centrally truncated multivariate t. It is flexible enough to include the multivariate t and multivariate skew-t distributions, and mathematically tractable enough to account for central truncation of a hidden t variable. The class, closed under linear transformation, marginal, and conditional operations, is studied from several aspects, such as the shape of the probability density function, conditioning of a distribution, scale mixtures of the multivariate normal, and a probabilistic representation. The relationships among these aspects are given, and various properties of the class are also discussed. The necessary theory and two applications are provided.

7.
Let X = (X, Y) be a pair of lifetimes whose dependence structure is described by an Archimedean survival copula, and let X_t = [(X − t, Y − t) | X > t, Y > t] denote the corresponding pair of residual lifetimes after time t ≥ 0. Multivariate aging notions, defined by means of stochastic comparisons between X and X_t with t ≥ 0, were studied by Pellerey, F. (2008), On univariate and bivariate aging for dependent lifetimes with Archimedean survival copulas, Kybernetika 44: 795–806, who considered pairs of lifetimes having the same marginal distribution. Here, we present generalizations of his results, considering both stochastic comparisons between X_t and X_{t+s} for all t, s ≥ 0 and the case of dependent lifetimes having different distributions. Comparisons between two different pairs of residual lifetimes, at any time t ≥ 0, are discussed as well.

8.
ABSTRACT

Nonhomogeneous Poisson processes (NHPPs) provide many models for hardware and software reliability analysis. In order to choose an appropriate NHPP model, goodness-of-fit (GOF) tests have to be carried out. For power-law processes, many GOF tests have been developed; for other NHPP models, only the Conditional Probability Integral Transformation (CPIT) test has been proposed. However, the CPIT test is less powerful and cannot be applied to some NHPP models. This article proposes a general GOF test based on the Laplace statistic for a large class of NHPP models with intensity functions of the form αλ(t, β). The simulation results show that this test is more powerful than the CPIT test.
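For orientation, the sketch below computes the classical Laplace statistic in Python. This is the textbook homogeneous-Poisson-versus-trend form; the article builds its GOF test on this statistic for general intensities αλ(t, β), which involves additional steps not shown here.

```python
import numpy as np

# Classical Laplace test statistic for a point process observed on (0, T]:
# under a homogeneous Poisson process the event times are i.i.d. U(0, T)
# given their number n, so L is approximately standard normal. Large |L|
# signals departure from the hypothesized (here constant) intensity.
def laplace_statistic(event_times, T):
    t = np.asarray(event_times, dtype=float)
    n = len(t)
    return (t.sum() - n * T / 2.0) / (T * np.sqrt(n / 12.0))

rng = np.random.default_rng(3)
hpp = np.sort(rng.uniform(0, 100, size=60))   # no trend: L ~ N(0, 1)
print(laplace_statistic(hpp, 100.0))
```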

9.
This paper investigates two “non-exact” t-type tests, t(k_1) and t(k_2), of the individual coefficients of a linear regression model, based on two ordinary ridge estimators. The reported results are built on a simulation study covering 84 different models. For models with large standard errors, the ridge-based t-tests have correct levels with considerable gain in power over the least squares t-test, t(0). For models with small standard errors, t(k_1) is found to be liberal and is not safe to use, while t(k_2) is found to slightly exceed the nominal level in a few cases. When the two ridge tests are not winners, the results indicate that they do not lose much against t(0).
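A rough Python sketch of such a ridge-based t-type statistic follows. The plug-in covariance below is one standard choice, and the paper's particular ridge constants k_1 and k_2 are not reproduced; setting k = 0 recovers the ordinary least squares t-test t(0).

```python
import numpy as np

# Ridge-based t-type statistic: replace the OLS estimate by the ridge
# estimate b(k) = (X'X + kI)^{-1} X'y, with plug-in covariance
# s^2 (X'X + kI)^{-1} X'X (X'X + kI)^{-1}.
def ridge_t(X, y, k):
    n, p = X.shape
    A = np.linalg.inv(X.T @ X + k * np.eye(p))
    b = A @ X.T @ y
    resid = y - X @ b
    s2 = resid @ resid / (n - p)          # simple plug-in error variance
    cov = s2 * A @ X.T @ X @ A
    return b / np.sqrt(np.diag(cov))

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 3))
X[:, 2] = X[:, 1] + 0.05 * rng.normal(size=40)   # near-collinear design
y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=40)
print(ridge_t(X, y, k=0.0))   # k = 0: the least squares t-test t(0)
print(ridge_t(X, y, k=1.0))   # a ridge-based t-type test
```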

10.
ABSTRACT

In the stepwise procedure of selecting a fixed or a random explanatory variable in a mixed quantitative linear model with errors following a Gaussian stationary autocorrelated process, we studied the efficiency of five estimators relative to Generalized Least Squares (GLS): Ordinary Least Squares (OLS), Maximum Likelihood (ML), Restricted Maximum Likelihood (REML), First Differences (FD), and First-Difference Ratios (FDR). We also studied the validity and power of seven derived testing procedures for assessing the significance of the slope of the candidate explanatory variable x_2 entering a model that already contains one regressor x_1. In addition to five testing procedures from the literature, we considered the FDR t-test with n − 3 df and the modified t-test with n* − 3 df for partial correlations, where n* is Dutilleul's effective sample size. Efficiency, validity, and power were analyzed by Monte Carlo simulations, as functions of the nature, fixed vs. random (purely random or autocorrelated), of x_1 and x_2, the sample size, and the autocorrelation of random terms in the regression model. We report extensive results for the first-order autoregressive [AR(1)] autocorrelation structure, and discuss results obtained for other autocorrelation structures, such as the spherical semivariogram, first-order moving average [MA(1)], and ARMA(1,1), which space constraints prevented us from presenting in full. Overall, we found that:
  1. the efficiency of slope estimators and the validity of testing procedures depend primarily on the nature of x_2, but not on that of x_1;

  2. FDR is the most inefficient slope estimator, regardless of the nature of x_1 and x_2;

  3. REML is the most efficient of the slope estimators compared relative to GLS, provided the specified autocorrelation structure is correct and the sample size is large enough to ensure the convergence of its optimization algorithm;

  4. the FDR t-test, the modified t-test and the REML t-test are the most valid of the testing procedures compared, despite the inefficiency of the FDR and OLS slope estimators for the former two;

  5. the FDR t-test, however, suffers from a lack of power that varies with the nature of x_1 and x_2; and

  6. the modified t-test for partial correlations, which does not require the specification of an autocorrelation structure, can be recommended when x_1 is fixed or random and x_2 is random, whether purely random or autocorrelated. Our results are illustrated by the environmental data that motivated our work. A minimal simulation sketch of the OLS-versus-GLS efficiency comparison under AR(1) errors follows this list.
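The sketch below is a Monte Carlo comparison, in Python, of the OLS and GLS slope estimators under AR(1) errors. Everything about it (one fixed regressor, a known autocorrelation parameter, GLS given the true correlation matrix as a best-case benchmark) is an illustrative assumption, not the article's simulation design.

```python
import numpy as np

# Compare the Monte Carlo efficiency of OLS and GLS slope estimators when
# the regression errors follow a stationary AR(1) process.
def ar1_corr(n, rho):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

rng = np.random.default_rng(4)
n, rho, reps = 50, 0.6, 2000
Sigma = ar1_corr(n, rho)
L = np.linalg.cholesky(Sigma)          # to simulate correlated errors
Sigma_inv = np.linalg.inv(Sigma)
x = rng.normal(size=n)                 # one fixed regressor
X = np.column_stack([np.ones(n), x])

b_ols, b_gls = [], []
for _ in range(reps):
    y = X @ np.array([1.0, 0.5]) + L @ rng.normal(size=n)
    b_ols.append(np.linalg.lstsq(X, y, rcond=None)[0][1])
    XtSi = X.T @ Sigma_inv
    b_gls.append(np.linalg.solve(XtSi @ X, XtSi @ y)[1])

# GLS is BLUE given the true Sigma, so the ratio should exceed 1.
print("var(OLS slope) / var(GLS slope):", np.var(b_ols) / np.var(b_gls))
```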


11.
Abstract

The Birnbaum-Saunders (BS) distribution is an asymmetric probability model that is receiving considerable attention. In this article, we propose a methodology based on a new class of BS models generated from the Student-t distribution. We obtain a recurrence relationship for a BS distribution based on a nonlinear skew-t distribution. Estimators of the model parameters are obtained by means of the maximum likelihood method and evaluated by Monte Carlo simulations. We illustrate the results by analyzing two real data sets; these analyses allow the adequacy of the proposed model to be shown and discussed by applying model selection tools.
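A small Python sketch of the generating mechanism may be helpful. The classical BS(alpha, beta) variate arises from a standard normal kernel; replacing that kernel by a t_nu variate gives a heavier-tailed BS-type model in the spirit of the generalized-BS construction, which may differ in detail from the article's class.

```python
import numpy as np

# Generate Birnbaum-Saunders-type variates:
# T = beta * (a/2 + sqrt((a/2)^2 + 1))^2 with a = alpha * Z, where Z is the
# kernel variate: standard normal for the classical BS, t_nu for a
# Student-t-generated BS model (illustrative construction).
def rbs(alpha, beta, size, nu=None, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.standard_t(nu, size) if nu else rng.normal(size=size)
    a = alpha * z / 2.0
    return beta * (a + np.sqrt(a**2 + 1.0)) ** 2

rng = np.random.default_rng(8)
print(rbs(0.5, 1.0, 5, rng=rng))          # classical (normal-kernel) BS
print(rbs(0.5, 1.0, 5, nu=4, rng=rng))    # heavier-tailed t-kernel BS
```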

12.
Abstract

Satten et al. [Satten, G. A., Datta, S., Robins, J. M. (2001). Estimating the marginal survival function in the presence of time-dependent covariates. Statist. Probab. Lett. 54: 397–403] proposed an estimator [denoted by Ŝ(t)] of the survival function of failure times that belongs to the class of survival function estimators proposed by Robins [Robins, J. M. (1993). Information recovery and bias adjustment in proportional hazards regression analysis of randomized trials using surrogate markers. In: Proceedings of the American Statistical Association, Biopharmaceutical Section. Alexandria, VA: ASA, pp. 24–33]. The estimator is appropriate when data are subject to dependent censoring. In this article, it is demonstrated that the estimator Ŝ(t) can be extended to estimate the survival function when data are subject to dependent censoring and left truncation. In addition, we propose an alternative estimator of the survival function [denoted by Ŝ_w(t)] that is represented as an inverse-probability-weighted average, in the sense of Satten and Datta [Satten, G. A., Datta, S. (2001). The Kaplan–Meier estimator as an inverse-probability-of-censoring weighted average. Amer. Statist. 55: 207–210]. Simulation results show that when truncation is not severe, the mean squared error of Ŝ(t) is smaller than that of Ŝ_w(t), except when censoring is light. However, when truncation is severe, Ŝ_w(t) has the advantage of less bias, and the situation can be reversed.
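To show the flavor of the inverse-probability-weighted representation, here is a Python sketch of the Satten-Datta IPCW form of the Kaplan-Meier estimator, written for independent censoring only; the article's estimators handle dependent censoring and left truncation, which this sketch does not.

```python
import numpy as np

# IPCW representation: S_hat(s) = 1 - (1/n) * sum_i delta_i 1(T_i <= s) / K(T_i-),
# where K is the Kaplan-Meier estimate of the censoring survival function.
def censoring_km_at_times(times, delta):
    """Left-continuous KM estimate K(T_i-) of the censoring survival
    function, evaluated at each subject's own observed time."""
    order = np.argsort(times, kind="stable")
    t, d = times[order], delta[order]
    n = len(t)
    at_risk = n - np.arange(n)
    factors = np.where(d == 0, 1.0 - 1.0 / at_risk, 1.0)  # censoring events
    K = np.cumprod(factors)
    K_left = np.concatenate(([1.0], K[:-1]))
    out = np.empty(n)
    out[order] = K_left
    return out

def ipcw_survival(times, delta, grid):
    K = censoring_km_at_times(times, delta)
    w = np.where(delta == 1, 1.0 / K, 0.0)   # weight failures by 1/K(T_i-)
    n = len(times)
    return np.array([1.0 - np.sum(w * (times <= s)) / n for s in grid])

rng = np.random.default_rng(5)
T = rng.exponential(1.0, 300)                # true failure times
C = rng.exponential(2.0, 300)                # independent censoring times
times, delta = np.minimum(T, C), (T <= C).astype(int)
grid = np.array([0.25, 0.5, 1.0, 2.0])
print(np.column_stack([np.exp(-grid), ipcw_survival(times, delta, grid)]))
```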

13.
Stochastic Models, 2013, 29(1): 215–234
ABSTRACT

A basic difficulty in dealing with heavy-tailed distributions is that they may not have explicit Laplace transforms. This makes numerical methods that use the Laplace transform more challenging. This paper generalizes an existing method for approximating heavy-tailed distributions, for use in queueing analysis. The generalization involves fitting Chebyshev polynomials to a probability density function g(t) at specified points t_1, t_2, …, t_N. By choosing points t_i that rapidly get far out in the tail, it is possible to capture the tail behavior with relatively few points and to control the relative error in the approximation. We give numerical examples to evaluate the performance of the method in simple queueing problems.
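A Python sketch of the fitting idea follows. Everything in it is illustrative rather than the paper's exact construction: the map u = 2t/(t + c) − 1 taking [0, ∞) to [−1, 1), and the choice to interpolate log g (so that the recovered density has controlled relative error) are simple stand-ins.

```python
import numpy as np

# Approximate a heavy-tailed density g(t) by a Chebyshev polynomial fitted
# at points t_1 < ... < t_N that push rapidly into the tail.
c = 5.0
g = lambda t: 1.5 * (1.0 + t) ** -2.5      # Pareto-type heavy-tailed density

t_pts = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 50.0, 200.0, 1000.0])
u_pts = 2.0 * t_pts / (t_pts + c) - 1.0    # map [0, inf) -> [-1, 1)
coef = np.polynomial.chebyshev.chebfit(u_pts, np.log(g(t_pts)),
                                       deg=len(t_pts) - 1)

t_new = np.array([3.0, 20.0, 500.0])       # points between the nodes
u_new = 2.0 * t_new / (t_new + c) - 1.0
approx = np.exp(np.polynomial.chebyshev.chebval(u_new, coef))
# columns: true density, approximation, relative error
print(np.column_stack([g(t_new), approx, approx / g(t_new) - 1.0]))
```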

14.
ABSTRACT

In many clinical studies, patients are followed over time with their responses measured longitudinally. Using mixed model theory, one can characterize these data using a wide array of across-subject models. A state-space representation of the mixed effects model and use of the Kalman filter allow great flexibility in choosing the within-subject error correlation structure, even in the presence of missing or unequally spaced observations. Furthermore, the state-space approach avoids inverting large matrices, resulting in efficient computation, and allows detailed inference about the error correlation structure. We consider a bivariate situation where the longitudinal responses are unequally spaced and assume that the within-subject errors follow a continuous first-order autoregressive (CAR(1)) structure. Since a large number of nonlinear parameters need to be estimated, the modeling strategy and numerical techniques are critical in the process. We developed both a Visual Fortran® and a SAS® program for modeling such data. A simulation study was conducted to investigate the robustness of the model assumptions. We also use data from a psychiatric study to demonstrate our model fitting procedure.
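A univariate Python sketch of the state-space mechanics may be useful: for a CAR(1) error, the state transition between observations spaced dt apart is exp(−θ·dt), so the Kalman filter delivers the exact Gaussian likelihood in a single pass with no large matrix inversion. The parameter values and the measurement-error term are illustrative; the article's bivariate mixed-effects setting is much richer.

```python
import numpy as np

# Exact Gaussian log-likelihood of y_t = mu + e_t (+ measurement noise),
# where e_t is CAR(1): between observations dt apart the state transition
# is phi = exp(-theta*dt), with innovation variance sigma2*(1 - phi^2).
def car1_loglik(times, y, mu, theta, sigma2, meas_var=0.0):
    ll, a, P = 0.0, 0.0, sigma2            # stationary prior for the error
    for i, t in enumerate(times):
        if i > 0:                          # time update over the gap
            phi = np.exp(-theta * (t - times[i - 1]))
            a, P = phi * a, phi**2 * P + sigma2 * (1.0 - phi**2)
        v, F = y[i] - (mu + a), P + meas_var   # measurement update
        ll += -0.5 * (np.log(2.0 * np.pi * F) + v**2 / F)
        K = P / F
        a, P = a + K * v, (1.0 - K) * P
    return ll

rng = np.random.default_rng(5)
times = np.sort(rng.uniform(0, 10, size=30))   # unequally spaced
e = [rng.normal(0.0, 1.0)]                     # simulate CAR(1) errors
for dt in np.diff(times):
    phi = np.exp(-0.8 * dt)
    e.append(phi * e[-1] + rng.normal(0.0, np.sqrt(1.0 - phi**2)))
y = 2.0 + np.array(e)
print(car1_loglik(times, y, mu=2.0, theta=0.8, sigma2=1.0))
```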

15.
16.
This paper presents a methodology for model fitting and inference in the context of Bayesian models of the type f(Y | X, θ)f(X | θ)f(θ), where Y is the (set of) observed data, θ is a set of model parameters, and X is an unobserved (latent) stationary stochastic process induced by the first-order transition model f(X^(t+1) | X^(t), θ), where X^(t) denotes the state of the process at time (or generation) t. The crucial feature of this type of model is that, given θ, the transition model f(X^(t+1) | X^(t), θ) is known, but the distribution of the stochastic process in equilibrium, that is f(X | θ), is, except in very special cases, intractable, hence unknown. A further point to note is that the data Y are assumed to be observed when the underlying process is in equilibrium; in other words, the data are not collected dynamically over time. We refer to such a specification as a latent equilibrium process (LEP) model. It is motivated by problems in population genetics (though other applications are discussed), where it is of interest to learn about parameters such as mutation and migration rates and population sizes, given a sample of allele frequencies at one or more loci. In such problems it is natural to assume that the distribution of the observed allele frequencies depends on the true (unobserved) population allele frequencies, whereas the distribution of the true allele frequencies is only indirectly specified through a transition model. As a hierarchical specification, it is natural to fit the LEP within a Bayesian framework. Fitting such models is usually done via Markov chain Monte Carlo (MCMC). However, we demonstrate that, in the case of LEP models, implementation of MCMC is far from straightforward. The main contribution of this paper is to provide a methodology to implement MCMC for LEP models. We demonstrate our approach in population genetics problems with both simulated and real data sets. The resultant model fitting is computationally intensive, and thus we also discuss parallel implementation of the procedure in special cases.

17.
One method of assessing the fit of an event history model is to plot the empirical standard deviation of standardised martingale residuals. We develop an alternative procedure which is valid also in the presence of measurement error and applicable to both longitudinal and recurrent event data. Since the covariance between martingale residuals at times t_0 and t > t_0 is independent of t, a plot of these covariances should, for fixed t_0, have no time trend. A test statistic is developed from the increments in the estimated covariances, and we investigate its properties under various types of model misspecification. Applications of the approach are presented using two Brazilian studies measuring daily prevalence and incidence of infant diarrhoea and a longitudinal study into treatment of schizophrenia.

18.
We extend a diagnostic plot for the frailty distribution in proportional hazards models to the case of shared frailty. The plot is based on a closure property of exponential family failure distributions with canonical statistics z and g(z), namely that the frailty distribution among survivors at time t has the same form, with the same values of the parameters associated with g(z). We extend this property to shared frailty, considering various definitions of a “surviving” cluster at time t. We illustrate the effectiveness of the method in the case where the “death” of the cluster is defined by the first death among its members.

19.
20.
ABSTRACT

In many real life problems one assumes a normal model because the sample histogram looks unimodal and symmetric, and/or standard tests like the Shapiro-Wilk test favor such a model. In reality, however, the assumption of normality may be misplaced, since normality tests often fail to detect departures from normality (especially for small sample sizes) when the data actually come from slightly heavier-tailed symmetric unimodal distributions. For this reason it is important to see how the existing normal variance estimators perform when the actual distribution is a t-distribution with k degrees of freedom (the t_k-distribution). This note deals with the performance of standard normal variance estimators under t_k-distributions. It is shown that the relative ordering of the estimators is preserved under both quadratic loss and entropy loss, irrespective of the degrees of freedom and the sample size (provided the risks exist).
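The comparison is easy to replicate in outline. The Monte Carlo sketch below contrasts three standard scale multiples of the centered sum of squares under t_k data against quadratic loss; the note's analytical result covers exact risks and entropy loss as well, which this sketch does not attempt.

```python
import numpy as np

# Compare three standard "normal" variance estimators under t_k data:
# the centered sum of squares divided by n-1 (unbiased under normality),
# n (the MLE), and n+1 (minimum quadratic risk under normality).
# The quadratic risk is computed against the true variance k/(k-2),
# which is finite for k > 2 (and the risk needs k > 4 to exist).
rng = np.random.default_rng(6)
n, k, reps = 10, 5, 200_000
true_var = k / (k - 2.0)
x = rng.standard_t(k, size=(reps, n))
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
for div in (n - 1, n, n + 1):
    risk = np.mean((ss / div - true_var) ** 2)
    print(f"divisor {div}: estimated quadratic risk {risk:.3f}")
```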
