Similar Documents (20 results)
1.
The authors propose graphical and numerical methods for checking the adequacy of the logistic regression model for matched case-control data. Their approach is based on the cumulative sum of residuals over the covariate or linear predictor. Under the assumed model, the cumulative residual process converges weakly to a centered Gaussian limit whose distribution can be approximated via computer simulation. The observed cumulative residual pattern can then be compared both visually and analytically to a number of simulated realizations of the approximate limiting process under the null hypothesis. The proposed techniques allow one to check the functional form of each covariate, the logistic link function, and the overall model adequacy. The authors assess the performance of the proposed methods through simulation studies and illustrate them using data from a cardiovascular study.
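A minimal sketch of the cumulative-residual idea, here for an ordinary (unconditional) logistic regression rather than the matched case-control model of the paper, and with a crude multiplier approximation that ignores the extra variability from estimating the coefficients:

```python
# Hedged sketch: cumulative-residual model check for plain logistic regression.
# Not the matched case-control version, and the multiplier approximation below
# ignores the variation introduced by estimating the regression coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))           # true model is logistic in x
y = rng.binomial(1, p)

fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
resid = y - fit.predict(sm.add_constant(x))       # raw residuals y - p_hat

order = np.argsort(x)
obs_path = np.cumsum(resid[order]) / np.sqrt(n)   # cumulative residual process over x

# Crude multiplier approximation of the null process: perturb residuals by N(0,1) weights.
sims = np.array([np.cumsum(resid[order] * rng.normal(size=n)) / np.sqrt(n)
                 for _ in range(1000)])

# Supremum-type comparison of the observed excursion with the simulated ones.
p_value = np.mean(np.abs(sims).max(axis=1) >= np.abs(obs_path).max())
print(f"approximate p-value (sup of cumulative residuals): {p_value:.3f}")
```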

2.
The authors study a varying-coefficient regression model in which some of the covariates are measured with additive errors. They find that the usual local linear estimator (LLE) of the coefficient functions is biased and that the usual correction for attenuation fails to work. They propose a corrected LLE and show that it is consistent and asymptotically normal, and they also construct a consistent estimator for the model error variance. They then extend the generalized likelihood technique to develop a goodness-of-fit test for the model. They evaluate these various procedures through simulation studies and use them to analyze data from the Framingham Heart Study.
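A sketch of the naive local linear estimator of varying coefficients in y = a(u) + b(u)x with error-free covariates; the measurement-error correction that is the paper's contribution is not implemented here:

```python
# Hedged sketch: naive (uncorrected) local linear estimation of varying coefficients.
import numpy as np

def vc_local_linear(u0, u, x, y, h):
    """Local linear fit of y ~ a(u) + b(u) x at u = u0 with a Gaussian kernel."""
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    D = np.column_stack([np.ones_like(u), x, u - u0, x * (u - u0)])
    wd = D * w[:, None]
    coef = np.linalg.solve(wd.T @ D, wd.T @ y)
    return coef[0], coef[1]            # estimates of a(u0) and b(u0)

rng = np.random.default_rng(7)
n, h = 400, 0.1
u = rng.uniform(0, 1, n)
x = rng.normal(size=n)
y = np.sin(2 * np.pi * u) + (1 + u) * x + rng.normal(scale=0.3, size=n)

for u0 in np.linspace(0.1, 0.9, 9):
    a_hat, b_hat = vc_local_linear(u0, u, x, y, h)
    print(f"u={u0:.1f}  a_hat={a_hat:+.2f} (true {np.sin(2*np.pi*u0):+.2f})  "
          f"b_hat={b_hat:.2f} (true {1+u0:.2f})")
```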

3.
The authors show how to test the goodness-of-fit of a linear regression model when there are missing data in the response variable. Their statistics are based on the L2 distance between nonparametric estimators of the regression function and a √n-consistent estimator of the same function under the parametric model. They obtain the limit distribution of the statistics and check the validity of their bootstrap version. Finally, a simulation study allows them to examine the behaviour of their tests, whether the samples are complete or not.
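A simplified complete-data version of such a test: an L2 distance between a Nadaraya-Watson estimate and the fitted straight line, calibrated by a wild bootstrap; the missing-response adjustments of the paper are omitted:

```python
# Hedged sketch: L2-distance lack-of-fit test for a linear model (complete data only).
import numpy as np

def nw(x_eval, x, y, h):
    """Nadaraya-Watson regression estimate with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def l2_stat(x, y, h):
    """Average squared gap between the kernel fit and the straight-line fit."""
    beta = np.polyfit(x, y, deg=1)
    return np.mean((nw(x, x, y, h) - np.polyval(beta, x)) ** 2)

rng = np.random.default_rng(1)
n, h = 200, 0.3
x = rng.uniform(-1, 1, n)
y = 1 + 2 * x + rng.normal(scale=0.5, size=n)     # data generated from the linear model

t_obs = l2_stat(x, y, h)
beta = np.polyfit(x, y, deg=1)
res = y - np.polyval(beta, x)
# Wild bootstrap: resample residuals with Rademacher signs around the parametric fit.
t_boot = [l2_stat(x, np.polyval(beta, x) + res * rng.choice([-1.0, 1.0], n), h)
          for _ in range(500)]
print("bootstrap p-value:", np.mean(np.array(t_boot) >= t_obs))
```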

4.
Any continuous bivariate distribution can be expressed in terms of its margins and a unique copula. In the case of extreme-value distributions, the copula is characterized by a dependence function while each margin depends on three parameters. The authors propose a Bayesian approach for the simultaneous estimation of the dependence function and the parameters defining the margins. They describe a nonparametric model for the dependence function and a reversible jump Markov chain Monte Carlo algorithm for the computation of the Bayesian estimator. They show through simulations that their estimator has a smaller mean integrated squared error than classical nonparametric estimators, especially in small samples. They illustrate their approach on a hydrological data set.

5.
The authors show how an adjusted pseudo-empirical likelihood ratio statistic that is asymptotically distributed as a chi-square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non-stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points through simulations based on the Rao-Sampford method of unequal probability sampling without replacement. They conclude that the pseudo-empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.
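For intuition, a plain empirical-likelihood interval for a mean under simple random sampling; the design-weighted pseudo-empirical likelihood and survey features of the paper are not reproduced:

```python
# Hedged sketch: ordinary empirical-likelihood confidence interval for a mean
# (simple random sampling), a plain-EL stand-in for the survey-weighted
# pseudo-empirical likelihood discussed in the paper.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def el_log_ratio(x, mu):
    """-2 log empirical likelihood ratio for H0: E[X] = mu."""
    z = x - mu
    if z.max() <= 0 or z.min() >= 0:          # mu outside the convex hull of the data
        return np.inf
    lo, hi = -1 / z.max() + 1e-10, -1 / z.min() - 1e-10
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=80)
cutoff = chi2.ppf(0.95, df=1)                  # chi-square calibration, df = 1
grid = np.linspace(x.mean() - 1.5, x.mean() + 1.5, 400)
inside = [mu for mu in grid if el_log_ratio(x, mu) <= cutoff]
print(f"95% EL interval for the mean: ({min(inside):.3f}, {max(inside):.3f})")
```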

6.
The authors propose a general model for the joint distribution of nominal, ordinal and continuous variables. Their work is motivated by the treatment of various types of data. They show how to construct parameter estimates for their model, based on the maximization of the full likelihood. They provide algorithms to implement it, and present an alternative estimation method based on the pairwise likelihood approach. They also touch upon the issue of statistical inference. They illustrate their methodology using data from a foreign language achievement study.

7.
In non-randomized biomedical studies using the proportional hazards model, the data often constitute an unrepresentative sample of the underlying target population, which results in biased regression coefficients. The bias can be avoided by weighting included subjects by the inverse of their respective selection probabilities, as proposed by Horvitz & Thompson (1952) and extended to the proportional hazards setting for use in surveys by Binder (1992) and Lin (2000). In practice, the weights are often estimated and must be treated as such in order for the resulting inference to be accurate. The authors propose a two-stage weighted proportional hazards model in which, at the first stage, weights are estimated through a logistic regression model fitted to a representative sample from the target population. At the second stage, a weighted Cox model is fitted to the biased sample. The authors propose estimators for the regression parameter and cumulative baseline hazard. They derive the asymptotic properties of the parameter estimators, accounting for the difference in the variance introduced by the randomness of the weights. They evaluate the accuracy of the asymptotic approximations in finite samples through simulation. They illustrate their approach in an analysis of renal transplant patients using data obtained from the Scientific Registry of Transplant Recipients.
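A bare-bones two-stage sketch, assuming the lifelines API (CoxPHFitter with a weights_col argument): selection probabilities are estimated by logistic regression and their inverses are used as weights in the Cox fit; the robust variance requested here is only a rough stand-in for the paper's correction for estimated weights:

```python
# Hedged sketch: inverse-probability-of-selection weighted Cox regression.
# Assumes the lifelines API; the variance here does not implement the paper's
# adjustment for the randomness of the estimated weights.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 2000
z = rng.normal(size=n)                                # covariate of interest
t = rng.exponential(scale=np.exp(-0.7 * z))           # event times from a PH model
c = rng.exponential(scale=2.0, size=n)                # censoring times
df = pd.DataFrame({"z": z,
                   "time": np.minimum(t, c),
                   "event": (t <= c).astype(int)})

# Stage 1: selection into the biased sample depends on z; estimate P(selected | z).
selected = rng.binomial(1, 1 / (1 + np.exp(-(0.2 + 1.0 * z)))).astype(bool)
sel_fit = sm.Logit(selected.astype(int), sm.add_constant(z)).fit(disp=0)
df["w"] = 1.0 / sel_fit.predict(sm.add_constant(z))   # inverse selection probabilities

# Stage 2: weighted Cox fit on the selected (biased) subsample only.
cph = CoxPHFitter()
cph.fit(df.loc[selected], duration_col="time", event_col="event",
        weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```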

8.
In many experiments, not all explanatory variables can be controlled. When the units arise sequentially, different approaches may be used. The authors study a natural sequential procedure for “marginally restricted” D-optimal designs. They assume that one set of explanatory variables (x1) is observed sequentially, and that the experimenter responds by choosing an appropriate value of the explanatory variable x2. In order to solve the sequential problem a priori, the authors consider the problem of constructing optimal designs with a prior marginal distribution for x1. This eliminates the influence of units already observed on the next unit to be designed. They give explicit designs for various cases in which the mean response follows a linear regression model; they also consider a case study with a nonlinear logistic response. They find that the optimal strategy often consists of randomizing the assignment of the values of x2.

9.
The authors provide a rigorous large sample theory for linear models whose response variable has been subjected to the Box-Cox transformation. They provide a continuous asymptotic approximation to the distribution of estimators of natural parameters of the model. They show, in particular, that the maximum likelihood estimator of the ratio of slope to residual standard deviation is consistent and relatively stable. The authors further show the importance for inference of normality of the errors and give tests for normality based on the estimated residuals. For non-normal errors, they give adjustments to the log-likelihood and to asymptotic standard errors.
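A short sketch of the profile likelihood for the Box-Cox parameter in a linear model, including the slope-to-residual-standard-deviation ratio highlighted in the abstract:

```python
# Hedged sketch: profile log-likelihood for the Box-Cox parameter in a linear
# model, i.e. -n/2 * log(RSS/n) plus the Jacobian term (lambda - 1) * sum(log y).
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1) / lam

def profile_loglik(lam, y, X):
    z = boxcox(y, lam)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    rss = np.sum((z - X @ beta) ** 2)
    n = len(y)
    return -0.5 * n * np.log(rss / n) + (lam - 1) * np.sum(np.log(y))

rng = np.random.default_rng(4)
x = rng.uniform(1, 5, 200)
y = np.exp(0.8 + 0.5 * x + rng.normal(scale=0.3, size=200))  # true lambda is 0 (log scale)
X = np.column_stack([np.ones_like(x), x])

grid = np.linspace(-1, 1, 201)
ll = np.array([profile_loglik(l, y, X) for l in grid])
lam_hat = grid[ll.argmax()]

# Slope-to-residual-SD ratio on the estimated transformed scale, as studied in the paper.
z = boxcox(y, lam_hat)
beta, *_ = np.linalg.lstsq(X, z, rcond=None)
ratio = beta[1] / np.sqrt(np.sum((z - X @ beta) ** 2) / (len(y) - 2))
print(f"lambda_hat = {lam_hat:.2f}, slope / residual SD = {ratio:.2f}")
```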

10.
The authors give tests of fit for the hyperbolic distribution, based on the Cramér-von Mises statistic W². They consider the general case with four parameters unknown, and some specific cases where one or two parameters are fixed. They give two examples using stock price data.

11.
The authors define a class of “partially linear single-index” survival models that are more flexible than the classical proportional hazards regression models in their treatment of covariates. The latter enter the proposed model either via a parametric linear form or a nonparametric single-index form. It is then possible to model both linear and functional effects of covariates on the logarithm of the hazard function and, if necessary, to reduce the dimensionality of multiple covariates via the single-index component. The partially linear hazards model and the single-index hazards model are special cases of the proposed model. The authors develop a likelihood-based inference to estimate the model components via an iterative algorithm. They establish an asymptotic distribution theory for the proposed estimators, examine their finite-sample behaviour through simulation, and use a set of real data to illustrate their approach.

12.
The authors show how to extend univariate mixture autoregressive models to a multivariate time series context. As in the univariate case, the multivariate model consists of a mixture of stationary or nonstationary autoregressive components. The authors give the first- and second-order stationarity conditions for the multivariate case up to order 2. They also derive the second-order stationarity condition for the univariate mixture model up to arbitrary order. They describe an EM algorithm for estimation, as well as a diagnostic checking procedure. They study the performance of their method via simulations and include a real application.

13.
In longitudinal studies, observation times are often irregular and subject-specific. Frequently they are related to the outcome measure or to other variables that are associated with the outcome measure but undesirable to condition upon in the model for outcome. Regression analyses that are unadjusted for outcome-dependent follow-up then yield biased estimates. The authors propose a class of inverse-intensity rate-ratio weighted estimators in generalized linear models that adjust for outcome-dependent follow-up. The estimators, based on estimating equations, are very simple and easily computed; they can be used under mixtures of continuous and discrete observation times. The predictors of observation times can be past observed outcomes, cumulative values of outcome-model covariates and other factors associated with the outcome. The authors validate their approach through simulations and illustrate it using data from a supported housing program run by the US federal government.

14.
The authors present an improved ranked set two-sample Mann-Whitney-Wilcoxon test for a location shift between samples from two distributions F and G. They define a function that measures the amount of information provided by each observation from the two samples, given the actual joint ranking of all the units in a set. This information function is used as a guide for improving the Pitman efficacy of the Mann-Whitney-Wilcoxon test. When the underlying distributions are symmetric, observations at their mode(s) must be quantified in order to gain efficiency. Analogous results are provided for asymmetric distributions.

15.
In an affected-sib-pair genetic linkage analysis, identity-by-descent (IBD) data for affected sib pairs are routinely collected at a large number of markers along chromosomes. Under very general genetic assumptions, the IBD distribution at each marker satisfies the possible triangle constraint. Statistical analysis of IBD data should thus utilize this information to improve efficiency. At the same time, this constraint renders the usual regularity conditions for likelihood-based statistical methods unsatisfied. In this paper, the authors study the asymptotic properties of the likelihood ratio test (LRT) under the possible triangle constraint. They derive the limiting distribution of the LRT statistic based on data from a single locus. They investigate the precision of the asymptotic distribution and the power of the test by simulation. They also study the test based on the supremum of the LRT statistics over the markers distributed throughout a chromosome. Instead of deriving a limiting distribution for this test, they use a mixture of chi-squared distributions to approximate its true distribution. Their simulation results show that this approach has desirable simplicity and satisfactory precision.

16.
The authors study the problem of testing whether two populations have the same law by comparing kernel estimators of the two density functions. The proposed test statistic is based on a local empirical likelihood approach. They obtain the asymptotic distribution of the test statistic and propose a bootstrap approximation to calibrate the test. A simulation study is carried out in which the proposed method is compared with two competitors, and a procedure to select the bandwidth parameter is studied. The proposed test can be extended to more than two samples and to multivariate distributions.
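A generic stand-in for such a comparison: an integrated squared difference between the two kernel density estimates, calibrated by permutation rather than by the local empirical likelihood and bootstrap of the paper:

```python
# Hedged sketch: two-sample density comparison via kernel estimates, using an
# integrated squared difference and a permutation p-value (not the paper's statistic).
import numpy as np
from scipy.stats import gaussian_kde

def isd(x, y, grid):
    """Integrated squared difference between the two kernel density estimates."""
    d = gaussian_kde(x)(grid) - gaussian_kde(y)(grid)
    return np.sum(d ** 2) * (grid[1] - grid[0])

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, 100)
y = rng.normal(0.4, 1.0, 120)
grid = np.linspace(-4, 5, 400)

t_obs = isd(x, y, grid)
pooled = np.concatenate([x, y])
t_perm = []
for _ in range(500):
    perm = rng.permutation(pooled)
    t_perm.append(isd(perm[:len(x)], perm[len(x):], grid))
print("permutation p-value:", np.mean(np.array(t_perm) >= t_obs))
```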

17.
The authors consider the empirical likelihood method for the regression model of mean quality-adjusted lifetime with right censoring. They show that an empirical log-likelihood ratio for the vector of the regression parameters is asymptotically a weighted sum of independent chi-squared random variables. They adjust this empirical log-likelihood ratio so that the limiting distribution is a standard chi-square and construct corresponding confidence regions. Simulation studies lead them to conclude that empirical likelihood methods outperform the normal approximation methods in terms of coverage probability. They illustrate their methods with a data example from a breast cancer clinical trial study.

18.
The authors propose methods based on the stratified Cox proportional hazards model that account for the fact that the data have been collected according to a complex survey design. The methods they propose are based on the theory of estimating equations in conjunction with empirical process theory. The authors also discuss issues concerning ignorable sampling design, and the use of weighted and unweighted procedures. They illustrate their methodology by an analysis of jobless spells in Statistics Canada's Survey of Labour and Income Dynamics. They discuss briefly problems concerning weighting, model checking, and missing or mismeasured data. They also identify areas for further research.

19.
The authors address the problem of estimating an inter-event distribution on the basis of count data. They derive a nonparametric maximum likelihood estimate of the inter-event distribution utilizing the EM algorithm both in the case of an ordinary renewal process and in the case of an equilibrium renewal process. In the latter case, the iterative estimation procedure follows the basic scheme proposed by Vardi for estimating an inter-event distribution on the basis of time-interval data; it combines the outputs of the E-step corresponding to the inter-event distribution and to the length-biased distribution. The authors also investigate a penalized likelihood approach to provide the proposed estimation procedure with regularization capabilities. They evaluate the practical estimation procedure using simulated count data and apply it to real count data representing the elongation of coffee-tree leafy axes.

20.
Choulakian, Lockhart & Stephens (1994) proposed Cramér-von Mises statistics for testing fit to a fully specified discrete distribution. The authors give slightly modified definitions for these statistics and determine their asymptotic behaviour in the case when unknown parameters in the distribution must be estimated from the sample data. They also present two examples of applications.
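For the fully specified case, the discrete Cramér-von Mises statistic takes a simple cumulative form; a Monte Carlo sketch follows, while the estimated-parameter asymptotics developed in the paper are not reproduced:

```python
# Hedged sketch: discrete Cramer-von Mises statistic for a FULLY SPECIFIED
# distribution, with a Monte Carlo approximation of its null distribution.
import numpy as np

def cvm_discrete(counts, p):
    """W^2 = (1/n) * sum_j S_j^2 * p_j, S_j the cumulative observed-minus-expected."""
    n = counts.sum()
    s = np.cumsum(counts - n * p)
    return np.sum(s ** 2 * p) / n

rng = np.random.default_rng(6)
p0 = np.array([0.1, 0.2, 0.3, 0.25, 0.15])                       # hypothesized probabilities
counts = rng.multinomial(200, [0.1, 0.15, 0.35, 0.25, 0.15])     # observed cell counts

w2_obs = cvm_discrete(counts, p0)
w2_null = np.array([cvm_discrete(rng.multinomial(200, p0), p0) for _ in range(5000)])
print(f"W^2 = {w2_obs:.3f}, Monte Carlo p-value = {np.mean(w2_null >= w2_obs):.3f}")
```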
