Similar Documents (20 results)
1.
One method of assessing the fit of an event history model is to plot the empirical standard deviation of standardised martingale residuals. We develop an alternative procedure which is valid also in the presence of measurement error and applicable to both longitudinal and recurrent event data. Since the covariance between martingale residuals at times t0 and t > t0 is independent of t, a plot of these covariances should, for fixed t0, have no time trend. A test statistic is developed from the increments in the estimated covariances, and we investigate its properties under various types of model misspecification. Applications of the approach are presented using two Brazilian studies measuring daily prevalence and incidence of infant diarrhoea and a longitudinal study into treatment of schizophrenia.

2.
We consider the specific transformation of a Wiener process {X(t), t ≥ 0} in the presence of an absorbing barrier a that results when this process is "time-locked" with respect to its first passage time Ta through a criterion level a, and the evolution of X(t) is considered backwards (retrospectively) from Ta. Formally, we study the random variables defined by Y(t) ≡ X(Ta − t) and derive explicit results for their density and mean, and also for their asymptotic forms. We discuss how our results can aid interpretations of time series "response-locked" to their times of crossing a criterion level.
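A minimal simulation sketch of the backward, "time-locked" alignment described above, assuming a drift-free, unit-variance Wiener process, barrier a = 1, and an Euler grid with step dt (all illustrative choices; the paper's explicit densities and asymptotic forms are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def time_locked_sample(a=1.0, dt=1e-3, t_back=0.5, horizon=20.0, n_paths=500):
    """Simulate Wiener paths, locate the first passage time Ta through level a,
    and record the backward-aligned values Y(t) = X(Ta - t) on a grid of lags t."""
    n_steps = int(horizon / dt)
    lag_idx = np.arange(0, int(t_back / dt) + 1)      # lags 0, dt, 2*dt, ..., t_back
    rows = []
    for _ in range(n_paths):
        x = np.cumsum(np.sqrt(dt) * rng.standard_normal(n_steps))
        crossed = np.nonzero(x >= a)[0]
        if crossed.size == 0:
            continue                                   # no crossing within the horizon
        k = crossed[0]                                 # index of Ta on the grid
        if k < lag_idx[-1]:
            continue                                   # crossing too early to look back t_back
        rows.append(x[k - lag_idx])                    # Y(t) = X(Ta - t)
    return lag_idx * dt, np.vstack(rows)

lags, Y = time_locked_sample()
for t, m in zip(lags[::125], Y.mean(axis=0)[::125]):
    print(f"backward lag t = {t:.3f}: mean of Y(t) = {m:.3f}")
```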

3.
In partly linear models, the dependence of the response y on (xᵀ, t) is modeled through the relationship y = xᵀβ + g(t) + ε, where ε is independent of (xᵀ, t). We are interested in developing an estimation procedure that combines the flexibility of partly linear models, studied by several authors, while allowing some variables to belong to a non-Euclidean space. The motivating application of this paper deals with the explanation of atmospheric SO2 pollution incidents using these models when some of the predictive variables take values on a cylinder. In this paper, the estimators of β and g are constructed when the explanatory variables t take values on a Riemannian manifold, and the asymptotic properties of the proposed estimators are obtained under suitable conditions. We illustrate the use of this estimation approach using an environmental data set and we explore the performance of the estimators through a simulation study.

4.
This paper presents a methodology for model fitting and inference in the context of Bayesian models of the type f(Y | X, θ)f(X | θ)f(θ), where Y is the (set of) observed data, θ is a set of model parameters and X is an unobserved (latent) stationary stochastic process induced by the first order transition model f(X^(t+1) | X^(t), θ), where X^(t) denotes the state of the process at time (or generation) t. The crucial feature of the above type of model is that, given θ, the transition model f(X^(t+1) | X^(t), θ) is known but the distribution of the stochastic process in equilibrium, that is f(X | θ), is, except in very special cases, intractable, hence unknown. A further point to note is that the data Y has been assumed to be observed when the underlying process is in equilibrium. In other words, the data is not collected dynamically over time. We refer to such a specification as a latent equilibrium process (LEP) model. It is motivated by problems in population genetics (though other applications are discussed), where it is of interest to learn about parameters such as mutation and migration rates and population sizes, given a sample of allele frequencies at one or more loci. In such problems it is natural to assume that the distribution of the observed allele frequencies depends on the true (unobserved) population allele frequencies, whereas the distribution of the true allele frequencies is only indirectly specified through a transition model. As a hierarchical specification, it is natural to fit the LEP within a Bayesian framework. Fitting such models is usually done via Markov chain Monte Carlo (MCMC). However, we demonstrate that, in the case of LEP models, implementation of MCMC is far from straightforward. The main contribution of this paper is to provide a methodology to implement MCMC for LEP models. We demonstrate our approach in population genetics problems with both simulated and real data sets. The resultant model fitting is computationally intensive and thus we also discuss parallel implementation of the procedure in special cases.

5.
Stochastic Models, 2013, 29(1): 215–234
ABSTRACT

A basic difficulty in dealing with heavy-tailed distributions is that they may not have explicit Laplace transforms. This makes numerical methods that use the Laplace transform more challenging. This paper generalizes an existing method for approximating heavy-tailed distributions, for use in queueing analysis. The generalization involves fitting Chebyshev polynomials to a probability density function g(t) at specified points t1, t2, …, tN. By choosing points ti, which rapidly get far out in the tail, it is possible to capture the tail behavior with relatively few points, and to control the relative error in the approximation. We give numerical examples to evaluate the performance of the method in simple queueing problems.
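A minimal sketch of the fitting idea above, using numpy's Chebyshev class. The density g is taken to be a Pareto-type stand-in, the fitting points are geometrically spaced so that they rapidly move out into the tail, and the fit is done on the log scale as one way to target relative rather than absolute error. All of these choices are illustrative assumptions; the paper's transformation and queueing application are not reproduced.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Stand-in heavy-tailed density: Pareto-type with shape alpha (an assumption, not from the paper)
alpha = 1.5
g = lambda t: alpha / (1.0 + t) ** (alpha + 1.0)

# Points t1 < t2 < ... < tN that "rapidly get far out in the tail"
t_pts = np.geomspace(1e-2, 1e4, 25)

# Fit a Chebyshev polynomial to log g as a function of log t; working on the log scale
# is one way to keep the relative error in the tail under control
cheb = Chebyshev.fit(np.log(t_pts), np.log(g(t_pts)), deg=10)

# Relative error on a finer grid between the fitting points
t_chk = np.geomspace(1e-2, 1e4, 400)
approx = np.exp(cheb(np.log(t_chk)))
rel_err = np.abs(approx - g(t_chk)) / g(t_chk)
print(f"max relative error over the check grid: {rel_err.max():.2e}")
```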

6.
ABSTRACT

In this paper we propose a class of skewed t link models for analyzing binary response data with covariates. It is a class of asymmetric link models designed to improve the overall fit when commonly used symmetric links, such as the logit and probit links, do not provide the best fit available for a given binary response dataset. Introducing a skewed t distribution for the underlying latent variable, we develop the class of models. For the analysis of the models, Bayesian and non-Bayesian methods are pursued using a Markov chain Monte Carlo (MCMC) sampling-based approach. Necessary theory involved in modelling and computation is provided. Finally, a simulation study and a real data example are used to illustrate the proposed methodology.
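The following sketch illustrates the general idea of an asymmetric link for binary regression, with two deliberate simplifications: a skew-normal CDF stands in for the skewed t link, and the parameters are estimated by direct maximum likelihood rather than the MCMC approach of the paper. The data are simulated and all values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skewnorm

rng = np.random.default_rng(1)

# Simulated binary data (assumed setup, not the paper's example)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = rng.binomial(1, skewnorm.cdf(X @ np.array([-0.5, 1.0]), a=3.0))

def neg_loglik(params):
    beta, shape = params[:-1], params[-1]       # 'shape' controls the asymmetry of the link
    p = np.clip(skewnorm.cdf(X @ beta, a=shape), 1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

fit = minimize(neg_loglik, x0=np.array([0.0, 0.0, 1.0]), method="Nelder-Mead")
print("estimated (beta0, beta1, skewness):", np.round(fit.x, 3))
```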

7.
ABSTRACT

This article suggests a chi-square test of fit for parametric families of bivariate copulas. The marginal distribution functions are assumed to be unknown and are estimated by their empirical counterparts. Therefore, the standard asymptotic theory of the test is not applicable, but we derive a rule for the determination of the appropriate degrees of freedom in the asymptotic chi-square distribution. The behavior of the test under H0 and for selected alternatives is investigated by Monte Carlo simulation. The test is applied to investigate the dependence structure of daily German asset returns. It turns out that the Gauss copula is inappropriate to describe the dependencies in the data. A tν-copula with low degrees of freedom performs better.

8.
Abstract

Using simultaneous Bayesian modeling, an attempt is made to analyze data on the size of lymphedema occurring in the arms of breast cancer patients after breast cancer surgery (as the longitudinal data) and the time interval for disease progression (as the time-to-event occurrence). A model based on a multivariate skew t distribution is shown to provide the best fit. This outcome was also confirmed by simulation studies.

9.
Distributions of a response y (height, for example) differ with values of a factor t (such as age). Given a response y* for a subject of unknown t*, the objective of inverse prediction is to infer the value of t* and to provide a defensible confidence set for it. Training data provide values of y observed on subjects at known values of t. Models relating the mean and variance of y to t can be formulated as mixed (fixed and random) models in terms of sets of functions of t, such as polynomial spline functions. A confidence set on t* can then be obtained as the set of hypothetical values of t for which y* is not detected as an outlier when compared to the model fit to the training data. With nonconstant variance, the p-values for these tests are approximate. This article describes how versatile models for this problem can be formulated in such a way that the computations can be accomplished with widely available software for mixed models, such as SAS PROC MIXED. Coverage probabilities of confidence sets on t* are illustrated in an example.
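A simplified sketch of the inverse-prediction idea: fit a model to the training data, then take as the confidence set for t* those hypothetical t values at which y* falls inside the prediction interval (i.e. is not flagged as an outlier). For illustration, an ordinary quadratic regression with constant variance replaces the mixed-model, nonconstant-variance formulation of the article, and the data are simulated; all names and values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Training data: response y (e.g. height) observed at known factor values t (e.g. age)
t_tr = np.repeat(np.arange(2, 16), 5).astype(float)
y_tr = 80 + 6.0 * t_tr - 0.08 * t_tr**2 + rng.normal(0, 3, t_tr.size)

def design(t):
    t = np.atleast_1d(t).astype(float)
    return np.column_stack([np.ones_like(t), t, t**2])   # quadratic trend in t

X = design(t_tr)
beta, *_ = np.linalg.lstsq(X, y_tr, rcond=None)
resid = y_tr - X @ beta
dof = t_tr.size - X.shape[1]
s2 = resid @ resid / dof
XtX_inv = np.linalg.inv(X.T @ X)

def confidence_set(y_star, t_grid, alpha=0.05):
    """Hypothetical t values at which y_star is NOT flagged as an outlier
    relative to the fitted model (i.e. lies inside the prediction interval)."""
    X0 = design(t_grid)
    pred = X0 @ beta
    se = np.sqrt(s2 * (1.0 + np.einsum("ij,jk,ik->i", X0, XtX_inv, X0)))
    tcrit = stats.t.ppf(1 - alpha / 2, dof)
    keep = np.abs(y_star - pred) <= tcrit * se
    return t_grid[keep]

grid = np.linspace(2, 15, 400)
cs = confidence_set(y_star=120.0, t_grid=grid)
print(f"approximate 95% confidence set for t*: [{cs.min():.2f}, {cs.max():.2f}]")
```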

10.
A life distribution is said to have a weak memoryless property if its conditional probability of survival beyond a fixed time point is equal to its (unconditional) survival probability at that point. Goodness‐of‐fit testing of this notion is proposed in the current investigation, both when the fixed time point is known and when it is unknown but estimable from the data. The limiting behaviour of the proposed test statistic is obtained and the null variance is explicitly given. The empirical power of the test is evaluated for a commonly known alternative using Monte Carlo methods, showing that the test performs well. The case when the fixed time point t0 equals a quantile of the distribution F gives a distribution‐free test procedure. The procedure works even if t0 is unknown but is estimable.
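A minimal empirical illustration of the weak memoryless idea (not the paper's test statistic or its null variance): for a fixed t0, compare the empirical conditional survival beyond t0 with the unconditional empirical survival, here using exponential data for which the property holds.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential data are memoryless, so the two curves below should roughly agree.
x_sample = rng.exponential(scale=2.0, size=5000)
t0 = 1.5                                   # the fixed time point

def surv(sample, x):
    """Empirical survival function P(X > x)."""
    x = np.atleast_1d(x)
    return (sample[None, :] > x[:, None]).mean(axis=1)

x_grid = np.linspace(0.0, 4.0, 9)
cond = surv(x_sample, x_grid + t0) / surv(x_sample, t0)   # P(X > x + t0 | X > t0)
uncond = surv(x_sample, x_grid)                           # P(X > x)

for x, c, u in zip(x_grid, cond, uncond):
    print(f"x = {x:.1f}: conditional = {c:.3f}, unconditional = {u:.3f}")
```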

11.
ABSTRACT

In the stepwise procedure of selection of a fixed or a random explanatory variable in a mixed quantitative linear model with errors following a Gaussian stationary autocorrelated process, we have studied the efficiency of five estimators relative to Generalized Least Squares (GLS): Ordinary Least Squares (OLS), Maximum Likelihood (ML), Restricted Maximum Likelihood (REML), First Differences (FD), and First-Difference Ratios (FDR). We have also studied the validity and power of seven derived testing procedures, to assess the significance of the slope of the candidate explanatory variable x2 to enter the model in which there is already one regressor x1. In addition to five testing procedures from the literature, we considered the FDR t-test with n − 3 df and the modified t-test with n̂ − 3 df for partial correlations, where n̂ is Dutilleul's effective sample size. Efficiency, validity, and power were analyzed by Monte Carlo simulations, as functions of the nature, fixed vs. random (purely random or autocorrelated), of x1 and x2, the sample size and the autocorrelation of random terms in the regression model. We report extensive results for the autocorrelation structure of first-order autoregressive [AR(1)] type, and discuss results we obtained for other autocorrelation structures, such as spherical semivariogram, first-order moving average [MA(1)] and ARMA(1,1), which we could not present because of space constraints. Overall, we found that:

1. the efficiency of slope estimators and the validity of testing procedures depend primarily on the nature of x2, but not on that of x1;

2. FDR is the most inefficient slope estimator, regardless of the nature of x1 and x2;

3. REML is the most efficient of the slope estimators compared relative to GLS, provided the specified autocorrelation structure is correct and the sample size is large enough to ensure the convergence of its optimization algorithm;

4. the FDR t-test, the modified t-test and the REML t-test are the most valid of the testing procedures compared, despite the inefficiency of the FDR and OLS slope estimators for the former two;

5. the FDR t-test, however, suffers from a lack of power that varies with the nature of x1 and x2; and

6. the modified t-test for partial correlations, which does not require the specification of an autocorrelation structure, can be recommended when x1 is fixed or random and x2 is random, whether purely random or autocorrelated. Our results are illustrated by the environmental data that motivated our work.


12.
13.
The adjusted r2 algorithm is a popular automated method for selecting the start time of the terminal disposition phase (tz) when conducting a noncompartmental pharmacokinetic data analysis. Using simulated data, the performance of the algorithm was assessed in relation to the ratio of the slopes of the preterminal and terminal disposition phases, the point of intercept of the terminal disposition phase with the preterminal disposition phase, the length of the terminal disposition phase captured in the concentration‐time profile, the number of data points present in the terminal disposition phase, and the level of variability in concentration measurement. The adjusted r2 algorithm was unable to identify tz accurately when there were more than three data points present in a profile's terminal disposition phase. The terminal disposition phase rate constant (λz) calculated based on the value of tz selected by the algorithm had a positive bias in all simulation data conditions. Tolerable levels of bias (median bias less than 5%) were achieved under conditions of low measurement variability. When measurement variability was high, tolerable levels of bias were attained only when the terminal phase time span was 4 multiples of t1/2 or longer. A comparison of the performance of the adjusted r2 algorithm, a simple r2 algorithm, and tz selection by visual inspection was conducted using a subset of the simulation data. In the comparison, the simple r2 algorithm performed as well as the adjusted r2 algorithm and the visual inspection method outperformed both algorithms. Recommendations concerning the use of the various tz selection methods are presented.
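A sketch of one common form of the adjusted r2 ("best fit") rule for selecting tz, applied to a simulated biexponential profile with low measurement variability; the exact implementation and simulation settings of the study above may differ, and all numerical values here are assumptions.

```python
import numpy as np

def adjusted_r2(y, yhat, p=2):
    n = y.size
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p)

def select_tz(times, conc, min_points=3):
    """Fit a log-linear regression to the last k points, k = 3, 4, ...,
    keep the fit with the largest adjusted r2; return (tz, lambda_z)."""
    best = None
    for k in range(min_points, len(times) + 1):
        t_sub, c_sub = times[-k:], conc[-k:]
        if np.any(c_sub <= 0):
            break                                  # cannot take logs of non-positive values
        slope, intercept = np.polyfit(t_sub, np.log(c_sub), 1)
        if slope >= 0:
            continue                               # not a declining phase
        adj = adjusted_r2(np.log(c_sub), slope * t_sub + intercept)
        if best is None or adj > best[0]:
            best = (adj, t_sub[0], -slope)
    return best[1], best[2]

# Simulated biexponential concentration-time profile with lognormal measurement noise
rng = np.random.default_rng(4)
t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12, 16, 24], dtype=float)
c = 10 * np.exp(-0.8 * t) + 2 * np.exp(-0.1 * t)
c *= np.exp(rng.normal(0, 0.05, t.size))           # low measurement variability

tz, lam_z = select_tz(t, c)
print(f"selected tz = {tz}, estimated lambda_z = {lam_z:.3f} (true terminal slope 0.1)")
```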

14.
We present a decomposition of the correlation coefficient between xt and xt−k into three terms that include the partial and inverse autocorrelations. The first term accounts for the portion of the autocorrelation that is explained by the inner variables {xt−1, xt−2, …, xt−k+1}, the second one measures the portion explained by the outer variables {xt+1, xt+2, …} ∪ {xt−k−1, xt−k−2, …} and the third term measures the correlation between xt and xt−k given all other variables. These terms, squared and summed, can form the basis of three portmanteau-type tests that are able to detect both deviation from white noise and lack of fit of an entertained model. Quantiles of their asymptotic sample distributions are complicated to derive at an adequate level of accuracy, so they are approximated using the Monte Carlo method. A simulation experiment is carried out to investigate significance levels and power of each test, and compare them to the portmanteau test.

15.
Abstract

The Birnbaum-Saunders (BS) distribution is an asymmetric probability model that is receiving considerable attention. In this article, we propose a methodology based on a new class of BS models generated from the Student-t distribution. We obtain a recurrence relationship for a BS distribution based on a nonlinear skew-t distribution. Model parameter estimators are obtained by the maximum likelihood method and evaluated by Monte Carlo simulations. We illustrate the obtained results by analyzing two real data sets. These data analyses allow the adequacy of the proposed model to be shown and discussed by applying model selection tools.
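A minimal sampling sketch based on the classical stochastic representation of the BS distribution, T = β[αZ/2 + ((αZ/2)² + 1)^(1/2)]², with the standard normal generator Z replaced by a Student-t variate to obtain a heavier-tailed BS-type model. This representation-based construction is only an illustration and may differ from the exact generalization developed in the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def rbs(n, alpha, beta, nu=None):
    """Draw from a Birnbaum-Saunders distribution via its stochastic representation.
    With nu=None the generator Z is standard normal (classical BS); with finite nu
    it is Student-t, giving a heavier-tailed BS-type model."""
    z = rng.standard_normal(n) if nu is None else stats.t.rvs(df=nu, size=n, random_state=rng)
    w = alpha * z / 2.0
    return beta * (w + np.sqrt(w**2 + 1.0)) ** 2

classical = rbs(100_000, alpha=0.5, beta=1.0)
heavy = rbs(100_000, alpha=0.5, beta=1.0, nu=4)
print("upper 0.1% quantile, normal-generated BS:", np.quantile(classical, 0.999).round(2))
print("upper 0.1% quantile, t(4)-generated BS  :", np.quantile(heavy, 0.999).round(2))
```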

16.
This paper considers estimation of the function g in the model Yt = g(Xt) + εt when E(εt | Xt) ≠ 0 with nonzero probability. We assume the existence of an instrumental variable Zt that is independent of εt, and of an innovation ηt = Xt − E(Xt | Zt). We use a nonparametric regression of Xt on Zt to obtain residuals η̂t, which in turn are used to obtain a consistent estimator of g. The estimator was first analyzed by Newey, Powell & Vella (1999) under the assumption that the observations are independent and identically distributed. Here we derive a sample mean‐squared‐error convergence result for independent identically distributed observations as well as a uniform‐convergence result under time‐series dependence.
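A sketch of the control-function idea behind this estimator, using polynomial series approximations in place of kernel regressions and i.i.d. simulated data; the data-generating process, polynomial degrees and names below are assumptions, and the paper's convergence results are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated design with endogeneity: X depends on the instrument Z and on an error
# component u that also enters the regression error, so E(error | X) != 0.
n = 5000
Z = rng.normal(size=n)
u = rng.normal(size=n)
X = 0.8 * Z + u
eps = 0.7 * u + rng.normal(scale=0.5, size=n)
g = lambda x: np.sin(x) + 0.5 * x                  # assumed "true" structural function
Y = g(X) + eps

def poly(x, deg):
    return np.column_stack([x**k for k in range(deg + 1)])

# Stage 1: series regression of X on Z, keep the residuals eta_hat
P1 = poly(Z, 4)
eta_hat = X - P1 @ np.linalg.lstsq(P1, X, rcond=None)[0]

# Stage 2: regress Y on series terms in X plus series terms in eta_hat (control function)
P2 = np.column_stack([poly(X, 4), poly(eta_hat, 3)[:, 1:]])
b2 = np.linalg.lstsq(P2, Y, rcond=None)[0]

# Evaluate the estimate of g on a grid (the eta_hat block absorbs the endogeneity)
x_grid = np.linspace(-2, 2, 5)
g_hat = poly(x_grid, 4) @ b2[:5]
print("x     :", np.round(x_grid, 2))
print("g(x)  :", np.round(g(x_grid), 2))
print("g_hat :", np.round(g_hat, 2))
```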

17.
Let X = (X, Y) be a pair of lifetimes whose dependence structure is described by an Archimedean survival copula, and let Xt = [(X − t, Y − t) | X > t, Y > t] denote the corresponding pair of residual lifetimes after time t ≥ 0. Multivariate aging notions, defined by means of stochastic comparisons between X and Xt, with t ≥ 0, were studied in Pellerey (2008, Kybernetika 44: 795–806), who considered pairs of lifetimes having the same marginal distribution. Here, we present generalizations of his results, considering both stochastic comparisons between Xt and Xt+s for all t, s ≥ 0 and the case of dependent lifetimes having different distributions. Comparisons between two different pairs of residual lifetimes, at any time t ≥ 0, are discussed as well.

18.
In this article, we consider a sample point (tj, sj) consisting of a value sj = f(tj) at height sj and abscissa (time or location) tj. We apply wavelet decomposition using shifts and dilations of the basic Haar transform and obtain an algorithm to analyze a signal or function f. We apply this algorithm in practice to approximate a function in a numerical example. Some relationships between wavelet coefficients and the asymptotic distribution of wavelet coefficients are investigated. Finally, we illustrate the results on simulated data using MATLAB and R.
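A minimal implementation sketch of the multilevel Haar decomposition and its inverse; the test function, grid and power-of-two length are illustrative assumptions, and the asymptotic-distribution results of the article are not reproduced.

```python
import numpy as np

def haar_decompose(signal):
    """Multilevel discrete Haar transform: repeatedly split into
    pairwise averages (approximation) and differences (detail)."""
    s = np.asarray(signal, dtype=float)
    assert s.size > 0 and (s.size & (s.size - 1)) == 0, "length must be a power of two"
    details = []
    while s.size > 1:
        approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)     # scaling coefficients
        detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)     # wavelet coefficients
        details.append(detail)
        s = approx
    return s, details                                    # coarsest approximation + details

def haar_reconstruct(approx, details):
    s = approx
    for detail in reversed(details):
        up = np.empty(2 * s.size)
        up[0::2] = (s + detail) / np.sqrt(2.0)
        up[1::2] = (s - detail) / np.sqrt(2.0)
        s = up
    return s

# Sample points (tj, sj) with sj = f(tj) on a dyadic grid (illustrative choice of f)
t = np.linspace(0.0, 1.0, 64, endpoint=False)
f = np.sin(2 * np.pi * t) + 0.3 * (t > 0.5)
approx, details = haar_decompose(f)
print("sizes of detail levels:", [d.size for d in details])
print("max reconstruction error:", np.abs(haar_reconstruct(approx, details) - f).max())
```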

19.
Abstract

Satten et al. [Satten, G. A., Datta, S., Robins, J. M. (2001). Estimating the marginal survival function in the presence of time dependent covariates. Statistics & Probability Letters 54: 397–403] proposed an estimator [denoted by Ŝ(t)] of the survival function of failure times that is in the class of survival function estimators proposed by Robins [Robins, J. M. (1993). Information recovery and bias adjustment in proportional hazards regression analysis of randomized trials using surrogate markers. In: Proceedings of the American Statistical Association, Biopharmaceutical Section. Alexandria, VA: ASA, pp. 24–33]. The estimator is appropriate when data are subject to dependent censoring. In this article, it is demonstrated that the estimator Ŝ(t) can be extended to estimate the survival function when data are subject to dependent censoring and left truncation. In addition, we propose an alternative estimator of the survival function [denoted by Ŝw(t)] that is represented as an inverse-probability-weighted average, as in Satten and Datta [Satten, G. A., Datta, S. (2001). The Kaplan–Meier estimator as an inverse-probability-of-censoring weighted average. The American Statistician 55: 207–210]. Simulation results show that when truncation is not severe the mean squared error of Ŝ(t) is smaller than that of Ŝw(t), except for the case when censoring is light. However, when truncation is severe, Ŝw(t) has the advantage of less bias and the situation can be reversed.

20.
In this paper we consider the Capital Asset Pricing Model under elliptical (symmetric) distributions. This class of distributions, which contains the normal, t, contaminated normal, and power exponential distributions, among others, offers a more flexible framework for modelling asset prices or returns. In order to analyze the sensitivity of the maximum likelihood estimators to possible outliers and/or atypical returns, the local influence method was implemented. The results are illustrated using a set of shares from companies traded on the Chilean stock market. Our main conclusion is that symmetric distributions with heavier tails than the normal, especially the t distribution with small degrees of freedom, show a better fit and reduce the influence of atypical returns on the maximum likelihood estimators.
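A minimal sketch of the underlying point that heavier-tailed members of the elliptical family can fit returns better than the normal: fit both a normal and a Student-t distribution to (simulated) returns by maximum likelihood and compare log-likelihood/AIC. The Chilean data and the local influence analysis are not reproduced; all values below are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Simulated daily returns with heavy tails as a stand-in for market data
returns = stats.t.rvs(df=4, scale=0.01, size=1500, random_state=rng)

# Fit a normal and a Student-t by maximum likelihood and compare the fits
mu, sigma = stats.norm.fit(returns)
df_t, loc_t, scale_t = stats.t.fit(returns)

ll_norm = stats.norm.logpdf(returns, mu, sigma).sum()
ll_t = stats.t.logpdf(returns, df_t, loc_t, scale_t).sum()

print(f"normal : loglik = {ll_norm:.1f}, AIC = {2 * 2 - 2 * ll_norm:.1f}")
print(f"t      : loglik = {ll_t:.1f}, AIC = {2 * 3 - 2 * ll_t:.1f}, fitted df = {df_t:.1f}")
```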
