Similar Articles
20 similar articles found.
1.
The authors propose graphical and numerical methods for checking the adequacy of the logistic regression model for matched case-control data. Their approach is based on the cumulative sum of residuals over the covariate or linear predictor. Under the assumed model, the cumulative residual process converges weakly to a centered Gaussian limit whose distribution can be approximated via computer simulation. The observed cumulative residual pattern can then be compared, both visually and analytically, to simulated realizations of the approximate limiting process under the null hypothesis. The proposed techniques allow one to check the functional form of each covariate, the logistic link function, and the overall model adequacy. The authors assess the performance of the proposed methods through simulation studies and illustrate them using data from a cardiovascular study.
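A minimal sketch of the cumulative-residual idea follows, for plain (unmatched) logistic regression: the paper's Gaussian multiplier approximation for matched data is replaced here by a parametric bootstrap under the fitted model, and the data and settings are illustrative only.

```python
# Sketch: cumulative sum of residuals over a covariate for a logistic model,
# compared with realizations simulated under the fitted (null) model. This
# parametric bootstrap stands in for the paper's Gaussian multiplier
# approximation and uses unmatched logistic regression for simplicity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + x))))

X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()

def cumres(y, fitted, x):
    """Cumulative sum of residuals, ordered by the covariate."""
    order = np.argsort(x)
    return np.cumsum((y - fitted)[order]) / np.sqrt(len(y))

W_obs = cumres(y, fit.fittedvalues, x)

sims = []
for _ in range(200):                      # simulate the null process
    y_star = rng.binomial(1, fit.fittedvalues)
    fit_star = sm.GLM(y_star, X, family=sm.families.Binomial()).fit()
    sims.append(cumres(y_star, fit_star.fittedvalues, x))
sims = np.array(sims)

# Supremum-type comparison: how often does a null curve exceed the observed?
p_value = np.mean(np.abs(sims).max(axis=1) >= np.abs(W_obs).max())
print(f"sup|W| = {np.abs(W_obs).max():.3f}, approximate p-value = {p_value:.3f}")
```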

2.
Longitudinal data often contain missing observations, and it is generally difficult to justify a particular missing-data mechanism, whether random or not, since competing mechanisms may be hard to distinguish. The authors describe a likelihood-based approach to estimating both the mean response and association parameters for longitudinal binary data with drop-outs. They specify marginal and dependence structures as regression models which link the responses to the covariates. They illustrate their approach using a data set from the Waterloo Smoking Prevention Project. They also report the results of simulation studies carried out to assess the performance of their technique under various circumstances.
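For contrast, a minimal sketch of fitting just the marginal mean model by GEE with statsmodels; the paper's full likelihood with explicit dependence parameters and a drop-out model goes well beyond this, and the simulated data here are illustrative.

```python
# Sketch: a marginal regression for longitudinal binary outcomes, fitted by
# GEE with an exchangeable working correlation. The paper instead builds a
# full likelihood with explicit association parameters and a drop-out model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(17)
n, T = 300, 4
sid = np.repeat(np.arange(n), T)
time = np.tile(np.arange(T), n)
b = np.repeat(rng.normal(0, 1, n), T)           # subject effect -> association
p = 1 / (1 + np.exp(-(-1.0 + 0.4 * time + b)))
y = rng.binomial(1, p)
df = pd.DataFrame({"y": y, "time": time, "id": sid})

model = smf.gee("y ~ time", groups="id", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.params)                   # marginal intercept and time effect
print(res.cov_struct.summary())     # estimated within-subject association
```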

3.
The authors define a class of "partially linear single-index" survival models that are more flexible than the classical proportional hazards regression models in their treatment of covariates. The latter enter the proposed model either via a parametric linear form or a nonparametric single-index form. It is then possible to model both linear and functional effects of covariates on the logarithm of the hazard function and, if necessary, to reduce the dimensionality of multiple covariates via the single-index component. The partially linear hazards model and the single-index hazards model are special cases of the proposed model. The authors develop a likelihood-based inference procedure to estimate the model components via an iterative algorithm. They establish an asymptotic distribution theory for the proposed estimators, examine their finite-sample behaviour through simulation, and use a set of real data to illustrate their approach.
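A toy sketch of the single-index idea with a two-dimensional nonparametric part, so the index direction reduces to a single angle: we profile the Cox partial likelihood (statsmodels' PHReg) over that angle, with a cubic polynomial standing in for the unknown ridge function. The paper's iterative likelihood algorithm and its theory are far more general.

```python
# Sketch: a toy partially linear single-index hazard fit. The linear part is
# z; w enters only through a single index whose direction (an angle, in two
# dimensions) is estimated by profiling the Cox partial likelihood.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(18)
n = 800
z = rng.normal(size=n)                  # linear covariate
w = rng.normal(size=(n, 2))             # enters through a single index
theta_true = np.pi / 3
idx = w @ [np.cos(theta_true), np.sin(theta_true)]
lam = np.exp(0.5 * z + np.sin(idx))     # true ridge function g = sin
t = rng.exponential(1 / lam)
c = rng.exponential(2.0, n)
time, status = np.minimum(t, c), (t <= c).astype(int)

def profile_ll(theta):
    u = w @ [np.cos(theta), np.sin(theta)]
    X = np.column_stack([z, u, u**2, u**3])   # cubic stand-in for g
    return sm.PHReg(time, X, status=status).fit().llf

grid = np.linspace(0, np.pi, 60)        # direction identified up to sign
theta_hat = grid[np.argmax([profile_ll(th) for th in grid])]
print(f"estimated index angle {theta_hat:.2f} (true {theta_true:.2f})")
```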

4.
Any continuous bivariate distribution can be expressed in terms of its margins and a unique copula. In the case of extreme-value distributions, the copula is characterized by a dependence function while each margin depends on three parameters. The authors propose a Bayesian approach for the simultaneous estimation of the dependence function and the parameters defining the margins. They describe a nonparametric model for the dependence function and a reversible jump Markov chain Monte Carlo algorithm for the computation of the Bayesian estimator. They show through simulations that their estimator has a smaller mean integrated squared error than classical nonparametric estimators, especially in small samples. They illustrate their approach on a hydrological data set.
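A minimal sketch of how a Pickands dependence function determines a bivariate extreme-value copula; the Gumbel/logistic family below is a hypothetical stand-in for the paper's nonparametric Bayesian estimate.

```python
# Sketch: a bivariate extreme-value copula is determined by a Pickands
# dependence function A on [0, 1] with max(t, 1-t) <= A(t) <= 1 and A convex.
import numpy as np

def ev_copula(u, v, A):
    """C(u, v) = exp((log u + log v) * A(log v / (log u + log v)))."""
    s = np.log(u) + np.log(v)
    return np.exp(s * A(np.log(v) / s))

def gumbel_A(theta):
    """Gumbel/logistic family, used here purely for illustration."""
    return lambda t: (t**theta + (1 - t)**theta) ** (1 / theta)

A = gumbel_A(2.0)
print(ev_copula(0.3, 0.7, A))               # joint probability under dependence
print(ev_copula(0.3, 0.7, lambda t: 1.0))   # A == 1 recovers independence: 0.21
```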

5.
In longitudinal studies, observation times are often irregular and subject-specific. Frequently they are related to the outcome measure or other variables that are associated with the outcome measure but undesirable to condition upon in the model for outcome. Regression analyses that are unadjusted for outcome-dependent follow-up then yield biased estimates. The authors propose a class of inverse-intensity rate-ratio weighted estimators in generalized linear models that adjust for outcome-dependent follow-up. The estimators, based on estimating equations, are very simple and easily computed; they can be used under mixtures of continuous and discrete observation times. The predictors of observation times can be past observed outcomes, cumulative values of outcome-model covariates and other factors associated with the outcome. The authors validate their approach through simulations and they illustrate it using data from a US federal government supported housing program.
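A discrete-time sketch of the weighting idea: visits depend on the previous outcome, and weighting each observed record by the inverse visit probability corrects the naive regression. Using the true visit probability (rather than a fitted intensity model) is a simplification for illustration.

```python
# Sketch: inverse-intensity weighting for outcome-dependent follow-up.
# Each subject is seen at time t with a probability that depends on the
# previous outcome; weighting observed records by 1 / P(visit | history)
# removes the selection bias in the naive regression of y on t.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, T = 2000, 6
rows = []
for i in range(n):
    b = rng.normal()                        # subject-level random intercept
    y_prev = b + rng.normal()               # baseline outcome (always seen)
    for t in range(1, T):
        y = 0.5 * t + b + rng.normal()      # true marginal slope = 0.5
        p_vis = 1 / (1 + np.exp(-(-1.0 + 1.5 * y_prev)))  # follow-up intensity
        if rng.uniform() < p_vis:           # visit happens
            rows.append((t, y, p_vis))
        y_prev = y
t_obs, y_obs, p_vis = np.array(rows).T

# In practice the visit intensity is unknown and must itself be estimated
# from an observation-time model; here we weight by the true probability.
X = sm.add_constant(t_obs)
naive = sm.WLS(y_obs, X).fit()
weighted = sm.WLS(y_obs, X, weights=1 / p_vis).fit()
print("naive slope:   ", round(naive.params[1], 3))     # attenuated
print("weighted slope:", round(weighted.params[1], 3))  # near the true 0.5
```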

6.
The authors propose a Bayesian decision-theoretic framework justifying randomization in clinical trials. Noting that the decision maker is often unable or unwilling to specify a unique utility function, they develop a sequential myopic design that includes randomization justified by the consideration of a set of utility functions. Randomization is introduced over all nondominated treatments, allowing for interim removal of treatments and early stopping. The authors illustrate their approach in the context of a study to find the optimal dose of pegylated interferon for platinum-resistant ovarian cancer. They also develop an algorithm to implement their methodology in a phase II clinical trial comparing several competing experimental treatments.
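A minimal sketch of randomization over nondominated arms, assuming posterior expected utilities have already been computed under several candidate utility functions; all numbers below are made up.

```python
# Sketch: randomizing over nondominated treatments. Each row of U holds the
# (posterior expected) utility of one treatment under several candidate
# utility functions; an arm is dominated if another arm is at least as good
# under every utility and strictly better under at least one.
import numpy as np

U = np.array([
    [0.70, 0.40, 0.55],   # treatment A
    [0.65, 0.45, 0.60],   # treatment B
    [0.60, 0.35, 0.50],   # treatment C (dominated by A)
])

def nondominated(U):
    keep = []
    for i in range(len(U)):
        dominated = any(
            np.all(U[j] >= U[i]) and np.any(U[j] > U[i])
            for j in range(len(U)) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

arms = nondominated(U)
rng = np.random.default_rng(2)
print("nondominated arms:", arms)             # [0, 1]
print("next assignment:", rng.choice(arms))   # uniform randomization over them
```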

7.
The authors propose pseudo-likelihood ratio tests for selecting semiparametric multivariate copula models in which the marginal distributions are unspecified, but the copula function is parameterized and can be misspecified. For the comparison of two models, the tests differ depending on whether the two copulas are generalized nonnested or generalized nested. For more than two models, the procedure is built on the reality check test of White (2000). Unlike White (2000), however, the test statistic is automatically standardized for generalized nonnested models (with the benchmark) and ignores generalized nested models asymptotically. The authors illustrate their approach with American insurance claim data.
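A sketch of the basic ingredients for two nonnested families (Clayton vs. Gaussian): rank-based pseudo-observations, pseudo-maximum-likelihood fits, and a Vuong-type standardized statistic. The paper's generalized nested/nonnested treatment and the reality-check extension to many models are not reproduced.

```python
# Sketch: a Vuong-type pseudo-likelihood ratio comparison of two parametric
# copulas fitted to rank-based pseudo-observations.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
# Clayton(theta=2) sample via the Marshall-Olkin construction.
v = rng.gamma(0.5, size=500)                  # shape = 1/theta
e = rng.exponential(size=(500, 2))
x = (1 + e / v[:, None]) ** (-0.5)            # U = (1 + E/V)^(-1/theta)
u = stats.rankdata(x, axis=0) / (len(x) + 1)  # pseudo-observations

def clayton_logpdf(theta, u, v):
    return (np.log(1 + theta) - (1 + theta) * (np.log(u) + np.log(v))
            - (2 + 1 / theta) * np.log(u**-theta + v**-theta - 1))

def gauss_logpdf(rho, u, v):
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    return (-0.5 * np.log(1 - rho**2)
            + (2 * rho * z1 * z2 - rho**2 * (z1**2 + z2**2))
            / (2 * (1 - rho**2)))

th = optimize.minimize_scalar(lambda t: -clayton_logpdf(t, u[:, 0], u[:, 1]).sum(),
                              bounds=(0.01, 20), method="bounded").x
rh = optimize.minimize_scalar(lambda r: -gauss_logpdf(r, u[:, 0], u[:, 1]).sum(),
                              bounds=(-0.99, 0.99), method="bounded").x

# Standardized (Vuong-type) statistic from pointwise log-density differences.
d = clayton_logpdf(th, u[:, 0], u[:, 1]) - gauss_logpdf(rh, u[:, 0], u[:, 1])
T = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
print(f"theta={th:.2f}, rho={rh:.2f}, Vuong T={T:.2f} (positive favours Clayton)")
```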

8.
A non-parametric rank-based test of exchangeability for bivariate extreme-value copulas is first proposed. The two key ingredients of the suggested approach are the non-parametric rank-based estimators of the Pickands dependence function recently studied by Genest and Segers, and a multiplier technique for obtaining approximate p-values for the derived statistics. The proposed approach is then extended to left-tail decreasing dependence structures that are not necessarily extreme-value copulas. Large-scale Monte Carlo experiments are used to investigate the level and power of the various versions of the test and show that the proposed procedure can be substantially more powerful than tests of exchangeability derived directly from the empirical copula. The approach is illustrated on well-known financial data.
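As a sketch, here is the simpler empirical-copula cousin of the proposed test: compare the empirical copula with its coordinate swap and compute a permutation p-value, which is exact under exchangeability. The paper's Pickands-based statistics and multiplier p-values are not shown.

```python
# Sketch: a rank-based check of exchangeability, i.e. C(u, v) = C(v, u),
# using the empirical copula and a coordinate-swapping permutation null.
import numpy as np
from scipy import stats

def emp_copula(u, grid):
    """Empirical copula evaluated on a grid of (s, t) points."""
    return np.array([np.mean((u[:, 0] <= s) & (u[:, 1] <= t)) for s, t in grid])

def exch_stat(x, grid):
    u = stats.rankdata(x, axis=0) / (len(x) + 1)
    return np.sum((emp_copula(u, grid) - emp_copula(u[:, ::-1], grid)) ** 2)

rng = np.random.default_rng(4)
n = 300
x = rng.normal(size=(n, 2))
x[:, 1] = 0.5 * x[:, 0] + x[:, 1] * np.where(x[:, 0] > 0, 2.0, 0.5)  # asymmetric

gg = np.linspace(0.1, 0.9, 9)
grid = [(s, t) for s in gg for t in gg]
obs = exch_stat(x, grid)

# Under exchangeability, swapping the two coordinates within any pair
# leaves the joint distribution unchanged, so this permutation is exact.
perms = []
for _ in range(500):
    swap = rng.uniform(size=n) < 0.5
    xp = x.copy()
    xp[swap] = xp[swap][:, ::-1]
    perms.append(exch_stat(xp, grid))
print("p-value:", np.mean(np.array(perms) >= obs))
```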

9.
The authors present an improved ranked set two-sample Mann-Whitney-Wilcoxon test for a location shift between samples from two distributions F and G. They define a function that measures the amount of information provided by each observation from the two samples, given the actual joint ranking of all the units in a set. This information function is used as a guide for improving the Pitman efficacy of the Mann-Whitney-Wilcoxon test. When the underlying distributions are symmetric, observations at their mode(s) must be quantified in order to gain efficiency. Analogous results are provided for asymmetric distributions.
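A sketch of the basic ranked-set MWW comparison that the paper refines; judgment ranking is taken to be perfect, and the information-function improvement for modal observations is not shown.

```python
# Sketch: ranked-set sampling plus a two-sample Mann-Whitney U comparison,
# with a permutation null built within each rank stratum (exact under H0).
import numpy as np

rng = np.random.default_rng(16)

def rss(draw, n_cycles, k):
    """Ranked-set sample: row = cycle, column r = r-th smallest of k draws."""
    return np.array([[np.sort(draw(k))[r] for r in range(k)]
                     for _ in range(n_cycles)])

k = 3
x = rss(lambda m: rng.normal(0.0, 1, m), n_cycles=15, k=k)
y = rss(lambda m: rng.normal(0.7, 1, m), n_cycles=15, k=k)  # shifted alternative

def u_stat(x, y):
    return np.sum(x.ravel()[:, None] < y.ravel()[None, :])

# Under H0 the groups are exchangeable within each rank stratum, so the
# permutation shuffles labels stratum by stratum.
obs = u_stat(x, y)
perm = []
for _ in range(999):
    xs, ys = x.copy(), y.copy()
    for r in range(k):
        pool = np.concatenate([xs[:, r], ys[:, r]])
        rng.shuffle(pool)
        xs[:, r], ys[:, r] = pool[:len(xs)], pool[len(xs):]
    perm.append(u_stat(xs, ys))
print("one-sided p-value:", np.mean(np.array(perm) >= obs))
```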

10.
Skew-symmetric models offer a very flexible class of distributions for modelling data. These distributions can also be viewed as selection models for the symmetric component of the specified skew-symmetric distribution. The estimation of the location and scale parameters corresponding to the symmetric component is considered here, with the symmetric component known. Emphasis is placed on using the empirical characteristic function to estimate these parameters. This is made possible by an invariance property of the skew-symmetric family of distributions, namely that even transformations of random variables that are skew-symmetric have a distribution only depending on the symmetric density. A distance metric between the real components of the empirical and true characteristic functions is minimized to obtain the estimators. The method is semiparametric, in that the symmetric component is specified, but the skewing function is assumed unknown. Furthermore, the methodology is extended to hypothesis testing. Two tests for a null hypothesis of specific parameter values are considered, as well as a test for the hypothesis that the symmetric component has a specific parametric form. A resampling algorithm is described for practical implementation of these tests. The outcomes of various numerical experiments are presented.
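A minimal sketch of the estimation idea when the symmetric component is standard normal: since the cosine is even, E[cos(t(X - mu)/sigma)] equals exp(-t^2/2) at the true parameters whatever the skewing function, so the estimator minimizes a grid-based distance to that target.

```python
# Sketch: ECF-based estimation of location and scale for a skew-symmetric
# model with standard normal symmetric component and unknown skewing function.
# The real part of the ECF of the standardized data only involves even
# functions, so it matches the normal characteristic function at the truth.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
x = 2.0 + 1.5 * stats.skewnorm.rvs(a=4, size=1000, random_state=rng)

t_grid = np.linspace(0.2, 2.0, 10)

def objective(par, x, t):
    mu, log_sigma = par
    z = (x - mu) / np.exp(log_sigma)
    ecf_real = np.mean(np.cos(np.outer(t, z)), axis=1)   # Re of the ECF
    return np.sum((ecf_real - np.exp(-t**2 / 2)) ** 2)

res = optimize.minimize(objective, x0=[np.median(x), 0.0],
                        args=(x, t_grid), method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu = {mu_hat:.2f} (true 2.0), sigma = {sigma_hat:.2f} (true 1.5)")
```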

11.
The authors propose a new type of scan statistic to test for the presence of space-time clusters in point process data, when the goal is to identify and evaluate the statistical significance of localized clusters. Their method is based only on point patterns for cases; it does not require any specific knowledge of the underlying population. The authors propose to scan the three-dimensional space with a score test statistic under the null hypothesis that the underlying point process is an inhomogeneous Poisson point process with space and time separable intensity. The alternative is that there are one or more localized space-time clusters. Their method has been implemented in a computationally efficient way so that it can be applied routinely. They illustrate their method with space-time crime data from Belo Horizonte, a Brazilian city, in addition to presenting a Monte Carlo study to analyze the power of their new test.
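A sketch of the scanning idea using only case data: under a separable intensity the expected count in a space-time cylinder factorizes, and permuting event times against locations gives a Monte Carlo null that preserves separability. The simple (O - E)/sqrt(E) score below is a stand-in for the paper's formal score test statistic.

```python
# Sketch: scanning space-time cylinders with case-only data. Under a
# separable intensity, E = N * (spatial share) * (temporal share).
import numpy as np

rng = np.random.default_rng(6)
N = 300
xy = rng.uniform(0, 10, size=(N, 2))
t = rng.uniform(0, 1, size=N)
xy[:40] = rng.normal([2, 2], 0.3, size=(40, 2))   # injected space-time cluster
t[:40] = rng.uniform(0.4, 0.5, size=40)

def max_score(xy, t, radii=(0.5, 1.0), half_windows=(0.05, 0.1)):
    best = 0.0
    for cx, cy, ct in zip(xy[:, 0], xy[:, 1], t):   # centre on each case
        d2 = (xy[:, 0] - cx) ** 2 + (xy[:, 1] - cy) ** 2
        for r in radii:
            space = d2 <= r**2
            for h in half_windows:
                time = np.abs(t - ct) <= h
                obs = np.sum(space & time)
                exp = N * space.mean() * time.mean()
                best = max(best, (obs - exp) / np.sqrt(exp))
    return best

obs_stat = max_score(xy, t)
# Permuting times against locations destroys space-time interaction while
# leaving both marginal patterns (hence any separable intensity) intact.
null = [max_score(xy, rng.permutation(t)) for _ in range(99)]
p = (1 + sum(s >= obs_stat for s in null)) / (1 + len(null))
print(f"scan statistic {obs_stat:.2f}, Monte Carlo p-value {p:.2f}")
```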

12.
The authors show how an adjusted pseudo-empirical likelihood ratio statistic that is asymptotically distributed as a chi-square random variable can be used to construct confidence intervals for a finite population mean or a finite population distribution function from complex survey samples. They consider both non-stratified and stratified sampling designs, with or without auxiliary information. They examine the behaviour of estimates of the mean and the distribution function at specific points using simulations based on the Rao-Sampford method of unequal probability sampling without replacement. They conclude that the pseudo-empirical likelihood ratio confidence intervals are superior to those based on the normal approximation, whether in terms of coverage probability, tail error rates or average length of the intervals.
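A sketch of the chi-square-calibrated empirical likelihood interval for a mean under simple random sampling; the paper's pseudo-empirical likelihood additionally carries survey design weights and a design-effect adjustment, which are omitted here.

```python
# Sketch: an empirical likelihood ratio confidence interval for a mean,
# inverting the chi-square(1) calibration of -2 log R(mu).
import numpy as np
from scipy import optimize, stats

def el_stat(x, mu):
    """-2 log empirical likelihood ratio for the mean mu (Owen's profile)."""
    d = x - mu
    if d.min() >= 0 or d.max() <= 0:
        return np.inf                         # mu outside the convex hull
    # The Lagrange multiplier must keep all weights positive: 1 + lam*d > 0.
    lo = (-1 + 1e-10) / d.max()
    hi = (-1 + 1e-10) / d.min()
    lam = optimize.brentq(lambda l: np.mean(d / (1 + l * d)), lo, hi)
    return 2 * np.sum(np.log1p(lam * d))

rng = np.random.default_rng(7)
x = rng.exponential(2.0, size=80)

crit = stats.chi2.ppf(0.95, df=1)
mus = np.linspace(x.mean() - 1.5, x.mean() + 1.5, 601)
inside = [m for m in mus if el_stat(x, m) <= crit]
print(f"mean {x.mean():.2f}, 95% EL interval ({min(inside):.2f}, {max(inside):.2f})")
```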

13.
To analyze bivariate time-to-event data from matched or naturally paired study designs, researchers frequently use a random effect called frailty to model the dependence between within-pair response measurements. The authors propose a computational framework for fitting dependent bivariate time-to-event data that combines frailty distributions and accelerated life regression models. In this framework users can choose from several parametric options for the frailties, as well as for the conditional distributions of within-pair responses. The authors illustrate the flexibility of their framework using paired data from a study of laser photocoagulation therapy for retinopathy in diabetic patients.
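A minimal sketch of the model structure: a gamma frailty shared within each pair rescales both members' event times around a Weibull accelerated-life regression, inducing within-pair dependence. The distributional choices and parameter values are illustrative, not the paper's fitted specification.

```python
# Sketch: an accelerated failure time model with a shared pair-level frailty.
# log T_ij = x_ij * beta + log Z_i + sigma * W_ij, with Z_i a gamma frailty
# and W_ij min-Gumbel errors (so the conditional distributions are Weibull).
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n_pairs, beta, sigma = 2000, 0.7, 0.5

x = rng.binomial(1, 0.5, size=(n_pairs, 2))        # e.g., treated-eye indicator
z = rng.gamma(shape=2.0, scale=0.5, size=n_pairs)  # shared frailty, mean 1
w = -rng.gumbel(size=(n_pairs, 2))                 # min-Gumbel -> Weibull times
t = z[:, None] * np.exp(x * beta + sigma * w)

# The shared frailty makes within-pair times dependent:
tau, _ = stats.kendalltau(t[:, 0], t[:, 1])
print(f"within-pair Kendall tau: {tau:.2f} (0 would mean independence)")
```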

14.
It is important to study historical temperature time series prior to the industrial revolution so that the current global warming trend can be viewed from a long-term historical perspective. Because no instrumental records of such historical temperatures exist, climatologists have been interested in reconstructing historical temperatures using various proxy time series. In this paper, the authors examine a state-space model approach for historical temperature reconstruction which makes use not only of the proxy data but also of information on external forcings. A challenge in the implementation of this approach is the estimation of the parameters in the state-space model. The authors develop two maximum likelihood methods for parameter estimation and study the efficiency and asymptotic properties of the associated estimators through a combination of theoretical and numerical investigations. The Canadian Journal of Statistics 38: 488-505; 2010
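A sketch of the computational core: the Kalman filter's prediction-error decomposition gives the likelihood of a scalar state-space model driven by an external forcing, which a generic optimizer then maximizes. The model form and parameter values are illustrative, not the paper's specification.

```python
# Sketch: maximum likelihood for a scalar state-space model via the Kalman
# filter. State = "true temperature" driven by a forcing f_t; observation
# = proxy series.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(9)
T = 300
f = np.sin(np.arange(T) / 20)                 # stand-in external forcing
phi, gamma, q, r = 0.8, 0.5, 0.3, 0.5         # true parameters
a = np.zeros(T)
for t in range(1, T):
    a[t] = phi * a[t - 1] + gamma * f[t] + np.sqrt(q) * rng.normal()
y = a + np.sqrt(r) * rng.normal(size=T)       # proxy observations

def neg_loglik(par, y, f):
    phi, gamma, log_q, log_r = par
    q, r = np.exp(log_q), np.exp(log_r)
    m, P, ll = 0.0, 1.0, 0.0
    for t in range(len(y)):
        m = phi * m + gamma * f[t]            # predict
        P = phi**2 * P + q
        S = P + r                             # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y[t] - m) ** 2 / S)
        K = P / S                             # update with y[t]
        m += K * (y[t] - m)
        P *= 1 - K
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.5, 0.0, 0.0, 0.0], args=(y, f),
                        method="Nelder-Mead", options={"maxiter": 2000})
phi_hat, gamma_hat = res.x[:2]
print(f"phi = {phi_hat:.2f} (true 0.8), gamma = {gamma_hat:.2f} (true 0.5)")
```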

15.
The authors consider a robust linear discriminant function based on high breakdown location and covariance matrix estimators. They derive influence functions for the estimators of the parameters of the discriminant function and for the associated classification error. The most B-robust estimator is determined within the class of multivariate S-estimators. This estimator, which minimizes the maximal influence that an outlier can have on the classification error, is also the most B-robust location S-estimator. A comparison of the most B-robust estimator with the more familiar biweight S-estimator is made.
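A sketch of the plug-in construction, substituting the minimum covariance determinant (MCD) estimator available in scikit-learn for the paper's S-estimators:

```python
# Sketch: a robust linear discriminant built from high-breakdown estimates.
# MCD location/scatter estimates replace the sample mean and covariance in
# Fisher's linear rule (the paper studies S-estimators instead).
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(10)
n = 200
X0 = rng.multivariate_normal([0, 0], np.eye(2), size=n)
X1 = rng.multivariate_normal([3, 1], np.eye(2), size=n)
X0[:10] = rng.multivariate_normal([10, 10], np.eye(2), size=10)  # outliers

mcd0 = MinCovDet(random_state=0).fit(X0)
mcd1 = MinCovDet(random_state=0).fit(X1)
mu0, mu1 = mcd0.location_, mcd1.location_
S = (mcd0.covariance_ + mcd1.covariance_) / 2     # pooled robust scatter

# Classify to group 0 when (x - (mu0+mu1)/2)' S^{-1} (mu0 - mu1) > 0.
w = np.linalg.solve(S, mu0 - mu1)
mid = (mu0 + mu1) / 2
x_new = np.array([0.5, 0.2])
print("assign to group:", 0 if (x_new - mid) @ w > 0 else 1)
```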

16.
We propose a spline-based semiparametric maximum likelihood approach to analysing the Cox model with interval-censored data. With this approach, the baseline cumulative hazard function is approximated by a monotone B-spline function. We extend the generalized Rosen algorithm to compute the maximum likelihood estimate. We show that the estimator of the regression parameter is asymptotically normal and semiparametrically efficient, although the estimator of the baseline cumulative hazard function converges at a rate slower than root-n. We also develop an easy-to-implement method for consistently estimating the standard error of the estimated regression parameter, which facilitates the proposed inference procedure for the Cox model with interval-censored data. The proposed method is evaluated by simulation studies regarding its finite sample performance and is illustrated using data from a breast cosmesis study.
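A sketch of the likelihood structure, with a monotone step function (cumulative positive jumps) standing in for the paper's monotone B-spline and a generic optimizer in place of the generalized Rosen algorithm:

```python
# Sketch: the interval-censored Cox likelihood. For an interval (L, R] the
# contribution is S(L) - S(R) with S(t) = exp(-Lambda(t) * exp(x*beta)),
# and S(R) = 0 for right-censored subjects (R = inf).
import numpy as np
from scipy import optimize

rng = np.random.default_rng(14)
n = 300
x = rng.binomial(1, 0.5, n)
t_true = rng.exponential(np.exp(-0.7 * x))        # true beta = 0.7
L = np.zeros(n)
R = np.full(n, np.inf)
for i in range(n):                                # a few random inspections
    for v in np.sort(rng.uniform(0, 3, 4)):
        if v < t_true[i]:
            L[i] = v
        else:
            R[i] = v
            break

knots = np.quantile(np.concatenate([L[L > 0], R[np.isfinite(R)]]),
                    np.linspace(0.1, 0.9, 6))

def cumhaz(t, gamma):
    """Monotone step baseline: sum of positive jumps at knots <= t."""
    return np.exp(gamma) @ (knots[:, None] <= t)

def neg_loglik(par):
    beta, gamma = par[0], par[1:]
    eta = np.exp(beta * x)
    S_L = np.exp(-cumhaz(L, gamma) * eta)
    S_R = np.where(np.isfinite(R), np.exp(-cumhaz(R, gamma) * eta), 0.0)
    return -np.sum(np.log(np.clip(S_L - S_R, 1e-300, None)))

res = optimize.minimize(neg_loglik, x0=np.zeros(1 + len(knots)),
                        method="Nelder-Mead",
                        options={"maxiter": 20000, "maxfev": 20000})
print(f"beta_hat = {res.x[0]:.2f} (true 0.7)")
```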

17.
The authors propose a procedure for determining the unknown number of components in mixtures by generalizing a Bayesian testing method proposed by Mengersen & Robert (1996). The testing criterion they propose involves a Kullback-Leibler distance, which may be weighted or not. They give explicit formulas for the weighted distance for a number of mixture distributions and propose a stepwise testing procedure to select the minimum number of components adequate for the data. Their procedure, which is implemented using the BUGS software, exploits a fast collapsing approach which accelerates the search for the minimum number of components by avoiding full refitting at each step. The performance of their method is compared, using both distances, to the Bayes factor approach.
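A sketch of the distance idea with frequentist EM fits (scikit-learn) and an unweighted Monte Carlo Kullback-Leibler estimate; the paper works with Bayesian fits in BUGS and closed-form weighted distances.

```python
# Sketch: comparing mixtures with k and k+1 components through a Monte Carlo
# Kullback-Leibler distance; a near-zero distance suggests the extra
# component adds nothing.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])[:, None]

def mc_kl(gm_small, gm_big, n_sim=20000):
    """KL(big || small) estimated from draws of the larger model."""
    s, _ = gm_big.sample(n_sim)
    return np.mean(gm_big.score_samples(s) - gm_small.score_samples(s))

fits = {k: GaussianMixture(k, random_state=0).fit(x) for k in (1, 2, 3)}
for k in (1, 2):
    print(f"KL(fit {k+1} || fit {k}) = {mc_kl(fits[k], fits[k+1]):.4f}")
# A large drop from k=1 to k=2 and a near-zero value from k=2 to k=3
# suggests stopping at two components.
```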

18.
The authors extend the classical Cormack-Jolly-Seber mark-recapture model to account for both temporal and spatial movement through a series of markers (e.g., dams). Survival rates are modeled as a function of (possibly) unobserved travel times. Because of the complex nature of the likelihood, they use a Bayesian approach based on the complete data likelihood, and integrate the posterior through Markov chain Monte Carlo methods. They test the model through simulations and also apply it to salmon data from the Columbia River system. The methodology was developed for use by the Pacific Ocean Shelf Tracking (POST) project.
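A sketch of the classical CJS likelihood that the proposed model extends (constant survival and detection, no spatial strata or travel times):

```python
# Sketch: Cormack-Jolly-Seber likelihood, conditional on first capture,
# with constant survival phi and detection p.
import numpy as np
from scipy import optimize

rng = np.random.default_rng(12)
n, T, phi_true, p_true = 500, 6, 0.8, 0.6
alive = np.ones(n, dtype=bool)
H = np.zeros((n, T), dtype=int)
H[:, 0] = 1                                    # all marked on occasion 1
for t in range(1, T):
    alive &= rng.uniform(size=n) < phi_true
    H[:, t] = alive & (rng.uniform(size=n) < p_true)

def neg_loglik(par, H):
    phi = 1 / (1 + np.exp(-par[0]))
    p = 1 / (1 + np.exp(-par[1]))
    T = H.shape[1]
    # chi[t] = P(never seen after occasion t | alive at t)
    chi = np.ones(T)
    for t in range(T - 2, -1, -1):
        chi[t] = (1 - phi) + phi * (1 - p) * chi[t + 1]
    ll = 0.0
    for h in H:
        f, l = np.flatnonzero(h)[[0, -1]]      # first and last capture
        for t in range(f, l):                  # survived each step; seen or not
            ll += np.log(phi) + (np.log(p) if h[t + 1] else np.log(1 - p))
        ll += np.log(chi[l])                   # never seen again after l
    return -ll

res = optimize.minimize(neg_loglik, x0=[0.0, 0.0], args=(H,), method="Nelder-Mead")
phi_hat, p_hat = 1 / (1 + np.exp(-res.x))
print(f"phi = {phi_hat:.2f} (true 0.8), p = {p_hat:.2f} (true 0.6)")
```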

19.
We consider a general non-parametric regression model, where the distribution of the error, given the covariate, is modelled by a conditional distribution function. For the estimation, a kernel approach as well as the (kernel based) empirical likelihood method are discussed. The latter method allows for incorporation of additional information on the error distribution into the estimation. We show weak convergence of the corresponding empirical processes to Gaussian processes and compare both approaches in asymptotic theory and by means of a simulation study.
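A minimal sketch of the kernel estimator of the conditional distribution function; the empirical-likelihood variant, which reweights the same indicators to incorporate side information on the errors, is not shown.

```python
# Sketch: a Nadaraya-Watson estimator of the conditional CDF F(y | x):
# F_hat(y0 | x0) = sum_i w_i(x0) * 1{Y_i <= y0} with kernel weights w_i.
import numpy as np

def cond_cdf(x0, y0, X, Y, h):
    """Conditional CDF estimate at (x0, y0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)
    return np.sum(w * (Y <= y0)) / np.sum(w)

rng = np.random.default_rng(13)
n = 500
X = rng.uniform(-2, 2, n)
Y = np.sin(X) + 0.3 * rng.normal(size=n)

# P(Y <= sin(1) | X = 1) should be close to 0.5 for this model.
print(round(cond_cdf(1.0, np.sin(1.0), X, Y, h=0.2), 3))
```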

20.
Goodness-of-fit tests are proposed for the skew-normal law in arbitrary dimension. In the bivariate case the proposed tests utilize the fact that the moment-generating function of the skew-normal variable is quite simple and satisfies a partial differential equation of the first order. This differential equation is estimated from the sample and the test statistic is constructed as an L2-type distance measure incorporating this estimate. Extension of the procedure to dimension greater than two is suggested, while an effective bootstrap procedure is used to study the behaviour of the new method with real and simulated data.
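A one-dimensional sketch of the L2/bootstrap idea using the univariate skew-normal MGF, M(t) = 2 exp(xi*t + (omega*t)^2/2) * Phi(delta*omega*t) with delta = alpha/sqrt(1+alpha^2); the paper's multivariate construction via a first-order PDE for the MGF is not reproduced.

```python
# Sketch: an L2-type distance between the empirical MGF and the fitted
# univariate skew-normal MGF, with a parametric bootstrap p-value.
import numpy as np
from scipy import stats

def sn_mgf(t, xi, omega, alpha):
    d = alpha / np.sqrt(1 + alpha**2)
    return 2 * np.exp(xi * t + (omega * t) ** 2 / 2) * stats.norm.cdf(d * omega * t)

def l2_stat(x, t_grid):
    a, loc, scale = stats.skewnorm.fit(x)
    emp = np.mean(np.exp(np.outer(t_grid, x)), axis=1)    # empirical MGF
    fit = sn_mgf(t_grid, loc, scale, a)
    # Gaussian weight tames the growth of the MGF at large |t|.
    return np.sum((emp - fit) ** 2 * np.exp(-4 * t_grid**2)), (a, loc, scale)

rng = np.random.default_rng(15)
x = stats.skewnorm.rvs(3, size=200, random_state=rng)

t_grid = np.linspace(-1, 1, 41)
obs, (a, loc, scale) = l2_stat(x, t_grid)
boot = [l2_stat(stats.skewnorm.rvs(a, loc, scale, size=len(x),
                                   random_state=rng), t_grid)[0]
        for _ in range(99)]
print("bootstrap p-value:", np.mean(np.array(boot) >= obs))
```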
