Similar Literature
20 similar documents found (search time: 62 ms)
1.
This paper presents a Markov chain Monte Carlo algorithm for a class of multivariate diffusion models with unobserved paths. This class is of high practical interest as it includes most diffusion-driven stochastic volatility models. The algorithm is based on a data augmentation scheme where the paths are treated as missing data. However, unless these paths are transformed so that the dominating measure is independent of any parameters, the algorithm becomes reducible. The methodology developed in Roberts and Stramer [2001a. On inference for partially observed nonlinear diffusion models using the Metropolis–Hastings algorithm. Biometrika 88(3), 603–621] circumvents the problem for scalar diffusions. We extend this framework to the class of models of this paper by introducing an appropriate reparametrisation of the likelihood that can be used to construct an irreducible data augmentation scheme. Practical implementation issues are considered and the methodology is applied to simulated data from the Heston model.
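The Heston model named above is easy to simulate with a Euler-type scheme. The sketch below generates the kind of simulated data the paper works with — it is not the paper's data augmentation algorithm — using a full-truncation rule to keep the variance process nonnegative; all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_heston(s0=100.0, v0=0.04, mu=0.05, kappa=2.0, theta=0.04,
                    sigma=0.3, rho=-0.7, T=1.0, n_steps=1000, seed=0):
    """Full-truncation Euler scheme for the Heston stochastic volatility model.
    dS = mu*S dt + sqrt(v)*S dW1;  dv = kappa*(theta - v) dt + sigma*sqrt(v) dW2,
    with corr(dW1, dW2) = rho.  All parameter values here are illustrative."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.empty(n_steps + 1)
    v = np.empty(n_steps + 1)
    s[0], v[0] = s0, v0
    for i in range(n_steps):
        z1 = rng.standard_normal()
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal()
        v_pos = max(v[i], 0.0)  # full truncation: negative excursions contribute zero volatility
        s[i + 1] = s[i] * np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v[i + 1] = v[i] + kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos * dt) * z2
    return s, v

s, v = simulate_heston()
```

The log-Euler update for the price keeps simulated prices strictly positive, which matters when the paths are later treated as missing data on a transformed scale.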

2.
Testing homogeneity is a fundamental problem in finite mixture models. It has been investigated by many researchers and most of the existing works have focused on the univariate case. In this article, the authors extend the use of the EM‐test for testing homogeneity to multivariate mixture models. They show that the EM‐test statistic asymptotically has the same distribution as a certain transformation of a single multivariate normal vector. On the basis of this result, they suggest a resampling procedure to approximate the P‐value of the EM‐test. Simulation studies show that the EM‐test has accurate type I errors and adequate power, and is more powerful and computationally efficient than the bootstrap likelihood ratio test. Two real data sets are analysed to illustrate the application of our theoretical results. The Canadian Journal of Statistics 39: 218–238; 2011 © 2011 Statistical Society of Canada
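The resampling idea can be sketched generically: draw many multivariate normal vectors, push each through the limiting transformation, and report the fraction that exceed the observed statistic. The transformation used below (a plain sum of squares, giving a chi-squared limit) is a stand-in for illustration only; the EM-test's actual transformation is model-specific.

```python
import numpy as np

def resampling_p_value(observed_stat, transform, dim, n_draws=100_000, seed=0):
    """Approximate P(T >= observed) when T is asymptotically a known
    transformation of a dim-variate standard normal vector."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_draws, dim))
    ref = np.apply_along_axis(transform, 1, z)  # reference sample from the limit law
    return float(np.mean(ref >= observed_stat))

# Illustration: if the limit were chi-squared(1), the transform is a sum of squares,
# and the p-value at 3.84 should be close to 0.05.
p = resampling_p_value(3.84, lambda z: np.sum(z**2), dim=1)
```

With 100,000 draws the Monte Carlo standard error of a p-value near 0.05 is about 0.0007, so a few significant digits are reliable.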

3.
Coarse data is a general type of incomplete data that includes grouped data, censored data, and missing data. The likelihood‐based estimation approach with coarse data is challenging because the likelihood function is in integral form. The Monte Carlo EM algorithm of Wei & Tanner [Wei & Tanner (1990). Journal of the American Statistical Association, 85, 699–704] is adapted to compute the maximum likelihood estimator in the presence of coarse data. Stochastic coarse data is also covered and the computation can be implemented using the parametric fractional imputation method proposed by Kim [Kim (2011). Biometrika, 98, 119–132]. Results from a limited simulation study are presented. The proposed method is also applied to the Korean Longitudinal Study of Aging (KLoSA). The Canadian Journal of Statistics 40: 604–618; 2012 © 2012 Statistical Society of Canada
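As a minimal sketch of the Monte Carlo EM idea for coarse data, consider estimating a normal mean when every observation is coarsened to an interval: the E-step imputes the latent value by sampling the truncated normal (here by simple rejection sampling), and the M-step averages the imputations. The data, the known variance, and the rejection-sampling choice are all illustrative assumptions, not from the paper or KLoSA.

```python
import numpy as np

def mcem_normal_mean(intervals, sigma=1.0, mu0=0.0, n_iter=50, m=2000, seed=0):
    """Monte Carlo EM for the mean of N(mu, sigma^2) when each observation is
    coarsened to an interval [lo, hi].
    E-step: sample the latent value from the truncated normal by rejection.
    M-step: the normal-mean MLE is the average of the imputed values."""
    rng = np.random.default_rng(seed)
    mu = mu0
    for _ in range(n_iter):
        imputed = []
        for lo, hi in intervals:
            draws = rng.normal(mu, sigma, size=20 * m)
            accepted = draws[(draws >= lo) & (draws <= hi)][:m]
            imputed.append(accepted.mean())
        mu = float(np.mean(imputed))
    return mu

# Coarse observations bracketing values near 1.0 (illustrative data)
data = [(0.5, 1.5), (0.8, 1.8), (0.2, 1.2), (0.6, 1.6)]
mu_hat = mcem_normal_mean(data)
```

Rejection sampling is adequate here because the intervals carry nontrivial probability; for narrow intervals an inverse-CDF draw from the truncated normal would be the safer E-step.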

4.
We study moderate deviations for the maximum likelihood estimation of some inhomogeneous diffusions. The moderate deviation principle with explicit rate functions is obtained. Moreover, we apply our result to the parameter estimation in α-Wiener bridges.

5.
The median is a commonly used parameter to characterize biomarker data. In particular, with two vastly different underlying distributions, comparing medians provides different information than comparing means; however, very few tests for medians are available. We propose a series of two‐sample median‐specific tests using empirical likelihood methodology and investigate their properties. We present the technical details of incorporating the relevant constraints into the empirical likelihood function for in‐depth median testing. An extensive Monte Carlo study shows that the proposed tests have excellent operating characteristics even in unfavourable settings such as non‐exchangeability under the null hypothesis. We apply the proposed methods to analyze biomarker data from Western blot analysis to compare normal cells with bronchial epithelial cells from a case–control study. The Canadian Journal of Statistics 39: 671–689; 2011. © 2011 Statistical Society of Canada
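The paper's two-sample constructions are more involved, but the way a median constraint enters the empirical likelihood is easiest to see in the one-sample case, where the EL ratio test for H0: median = m reduces to a closed binomial form. This simpler one-sample version is a sketch of the machinery, not the authors' two-sample tests.

```python
import math

def el_median_stat(x, m):
    """-2 log empirical likelihood ratio for H0: median(X) = m, one-sample case.
    Maximizing prod(n*p_i) subject to sum of p_i over {x_i <= m} equal to 1/2
    gives, with a = #{x_i <= m} (assuming no ties at m),
        -2 log R = 2*[ a*log(2a/n) + (n-a)*log(2(n-a)/n) ],
    compared against a chi-squared(1) critical value."""
    n = len(x)
    a = sum(1 for xi in x if xi <= m)
    if a == 0 or a == n:
        return float('inf')  # EL is zero: m lies outside the span of the data
    return 2 * (a * math.log(2 * a / n) + (n - a) * math.log(2 * (n - a) / n))

# A sample perfectly balanced around m = 0 gives a statistic of exactly 0
stat = el_median_stat([-3, -2, -1, 1, 2, 3], 0)
```

The statistic grows as the empirical split around m departs from 50/50, mirroring the sign-test information but on the likelihood-ratio scale.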

6.
The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the P‐value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probability of misleading evidence vanishes asymptotically under weak regularity conditions and such that evidence can support a simple null hypothesis. Instead of requiring a prior distribution, the DI satisfies a worst‐case minimax prediction criterion. Replacing a (possibly pseudo‐) likelihood function with its weighted counterpart extends the scope of the DI to models for which the unweighted NML is undefined. The likelihood weights leverage side information, either in data associated with comparisons other than the comparison at hand or in the parameter value of a simple null hypothesis. Two case studies, one involving multiple populations and the other involving multiple biological features, indicate that the DI is robust to the type of side information used when that information is assigned the weight of a single observation. Such robustness suggests that very little adjustment for multiple comparisons is warranted if the sample size is at least moderate. The Canadian Journal of Statistics 39: 610–631; 2011. © 2011 Statistical Society of Canada
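For the Bernoulli model with a simple null, the NML ratio can be computed exactly, which makes the DI concrete: a simple null is a single distribution, so its "NML" is just its likelihood, while the alternative's NML divides the maximized likelihood by its sum over all possible outcomes. This sketch omits the paper's weighted-NML extension and side-information weighting.

```python
from math import comb, log

def bernoulli_nml(k, n):
    """NML probability of observing k successes in n Bernoulli trials:
    maximized likelihood divided by its sum over all outcomes (the complexity)."""
    def max_lik(j):
        if j == 0 or j == n:
            return 1.0
        p = j / n
        return p**j * (1 - p)**(n - j)
    complexity = sum(comb(n, j) * max_lik(j) for j in range(n + 1))
    return comb(n, k) * max_lik(k) / complexity

def discrimination_information(k, n, theta0):
    """DI = log of [NML under the free-theta alternative] over
    [likelihood under the simple null theta0]."""
    null_prob = comb(n, k) * theta0**k * (1 - theta0)**(n - k)
    return log(bernoulli_nml(k, n) / null_prob)

di = discrimination_information(k=18, n=20, theta0=0.5)
```

Extreme data (18 successes in 20 trials against theta0 = 0.5) yields a positive DI, while data matching the null exactly yields a negative DI equal to minus the log-complexity, illustrating how the DI can support a simple null.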

7.
Covariate measurement error problems have been extensively studied in the context of right‐censored data but less so for current status data. Motivated by the zebrafish basal cell carcinoma (BCC) study, where the occurrence time of BCC was only known to lie before or after a sacrifice time and where the covariate (Sonic hedgehog expression) was measured with error, the authors describe a semiparametric maximum likelihood method for analyzing current status data with mismeasured covariates under the proportional hazards model. They show that the estimator of the regression coefficient is asymptotically normal and efficient and that the profile likelihood ratio test is asymptotically Chi‐squared. They also provide an easily implemented algorithm for computing the estimators. They evaluate their method through simulation studies, and illustrate it with a real data example. The Canadian Journal of Statistics 39: 73–88; 2011 © 2011 Statistical Society of Canada

8.
The authors derive closed‐form expressions for the full, profile, conditional and modified profile likelihood functions for a class of random growth parameter models they develop as well as Garcia's additive model. These expressions facilitate the determination of parameter estimates for both types of models. The profile, conditional and modified profile likelihood functions are maximized over few parameters to yield a complete set of parameter estimates. In the development of their random growth parameter models the authors specify the drift and diffusion coefficients of the growth parameter process in a natural way which gives interpretive meaning to these coefficients while yielding highly tractable models. They fit several of their random growth parameter models and Garcia's additive model to stock market data, and discuss the results. The Canadian Journal of Statistics 38: 474–487; 2010 © 2010 Statistical Society of Canada

9.
The authors consider the problem of simulating the times of events such as extremes and barrier crossings in diffusion processes. They develop a rejection sampler based on Shepp [Shepp (1979). Journal of Applied Probability, 16, 423–427] for simulating an extreme of a Brownian motion and use it in a general recursive scheme for more complex simulations, including simultaneous simulation of the minimum and maximum and application to more general diffusions. They price exotic options that are difficult to price analytically: a knock‐out barrier option with a modified payoff function, a lookback option that includes discounting at the risk‐free interest rate, and a chooser option where the choice is made at the time of a barrier crossing. The Canadian Journal of Statistics 38: 738–755; 2010 © 2010 Statistical Society of Canada
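A building block such schemes rely on is drawing the maximum of a Brownian path conditioned on its endpoint, which follows from the reflection principle and can be sampled by inverting the conditional CDF. The sketch below is this standard endpoint-conditioned draw, not Shepp's full rejection sampler.

```python
import numpy as np

def bridge_maximum(b, t, size, seed=0):
    """Draw M = max of a Brownian motion on [0, t] conditioned on B_0 = 0 and
    B_t = b.  By the reflection principle,
        P(M >= m | B_t = b) = exp(-2*m*(m - b)/t)   for m >= max(0, b),
    and setting this tail equal to U ~ Uniform(0,1) and solving the quadratic
    gives the inverse-CDF draw  M = (b + sqrt(b**2 - 2*t*log(U))) / 2."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=size)
    return (b + np.sqrt(b**2 - 2 * t * np.log(u))) / 2

m = bridge_maximum(b=0.0, t=1.0, size=200_000)
# For b = 0, t = 1, M = sqrt(-log(U)/2), whose mean is sqrt(pi/8) ≈ 0.6267
```

Since log(U) < 0, the square root is always real and the draw automatically satisfies M >= max(0, b), matching the support of the conditional law.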

10.
Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their abilities to discriminate diseased conditions from non‐diseased conditions. For a continuous‐scale diagnostic test, a popular summary index of the receiver operating characteristic (ROC) curve is the area under the curve (AUC). However, when our focus is on a certain region of false positive rates, we often use the partial AUC instead. In this paper we have derived the asymptotic normal distribution for the non‐parametric estimator of the partial AUC with an explicit variance formula. The empirical likelihood (EL) ratio for the partial AUC is defined and it is shown that its limiting distribution is a scaled chi‐square distribution. Hybrid bootstrap and EL confidence intervals for the partial AUC are proposed by using the newly developed EL theory. We also conduct extensive simulation studies to compare the relative performance of the proposed intervals and existing intervals for the partial AUC. A real example is used to illustrate the application of the recommended intervals. The Canadian Journal of Statistics 39: 17–33; 2011 © 2011 Statistical Society of Canada
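A nonparametric estimate of the partial AUC can be sketched as a plug-in: evaluate the empirical ROC at empirical control-score quantiles over the false-positive range of interest and integrate with the trapezoidal rule. This generic estimator illustrates the quantity; it is not necessarily the exact estimator whose asymptotics the paper derives.

```python
import numpy as np

def partial_auc(controls, cases, fpr_max, grid=1000):
    """Plug-in partial AUC: integrate the empirical ROC over FPR in [0, fpr_max],
    assuming higher scores indicate disease."""
    controls = np.asarray(controls, float)
    cases = np.asarray(cases, float)
    fprs = np.linspace(0.0, fpr_max, grid + 1)
    # threshold achieving each FPR: upper quantile of the control scores
    thresholds = np.quantile(controls, 1.0 - fprs)
    tprs = np.array([(cases > t).mean() for t in thresholds])
    # trapezoidal rule over the FPR grid
    return float(np.sum((tprs[1:] + tprs[:-1]) / 2 * np.diff(fprs)))

# Perfectly separated samples: the ROC is 1 everywhere, so pAUC equals fpr_max
pauc = partial_auc(np.arange(50), np.arange(100, 150), fpr_max=0.2)
```

Dividing pAUC by fpr_max gives the normalized index on [0, 1] that is often reported alongside the raw value.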

11.
Starting from the characterization of extreme‐value copulas based on max‐stability, large‐sample tests of extreme‐value dependence for multivariate copulas are studied. The two key ingredients of the proposed tests are the empirical copula of the data and a multiplier technique for obtaining approximate p‐values for the derived statistics. The asymptotic validity of the multiplier approach is established, and the finite‐sample performance of a large number of candidate test statistics is studied through extensive Monte Carlo experiments for data sets of dimension two to five. In the bivariate case, the rejection rates of the best versions of the tests are compared with those of the test of Ghoudi et al. (1998) recently revisited by Ben Ghorbal et al. (2009). The proposed procedures are illustrated on bivariate financial data and trivariate geological data. The Canadian Journal of Statistics 39: 703–720; 2011. © 2011 Statistical Society of Canada
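The first ingredient, the empirical copula, is straightforward to compute from ranks. A minimal bivariate version (pseudo-observations rank/(n+1), ties assumed absent) looks like this; the multiplier technique itself is beyond this sketch.

```python
import numpy as np

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v): the fraction of rank-based pseudo-observations
    (R_i/(n+1), S_i/(n+1)) lying in the lower-left rectangle [0,u] x [0,v]."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1  # ranks 1..n (no ties assumed)
    ry = np.argsort(np.argsort(y)) + 1
    pu, pv = rx / (n + 1), ry / (n + 1)
    return float(np.mean((pu <= u) & (pv <= v)))

# Comonotone data: the copula is the upper Fréchet bound, so C_n(u, u) ≈ u
x = np.arange(1, 101)
y = x**2  # strictly increasing transform of x
c = empirical_copula(x, y, 0.5, 0.5)
```

Because the empirical copula depends on the data only through ranks, it is invariant to strictly increasing transformations of the margins, which is exactly why y = x**2 behaves like y = x here.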

12.
We propose using the weighted likelihood method to fit a general relative risk regression model for current status data with missing data, as arises, for example, in case‐cohort studies. The missingness probability is either known or can be reasonably estimated. Asymptotic properties of the weighted likelihood estimators are established. For the case of using estimated weights, we construct a general theorem that guarantees the asymptotic normality of the M‐estimator of a finite dimensional parameter in a class of semiparametric models, where the infinite dimensional parameter is allowed to converge at a slower than parametric rate, and some other parameters in the objective function are estimated a priori. The weighted bootstrap method is employed to estimate the variances. Simulations show that the proposed method works well for finite sample sizes. A motivating example of the case‐cohort study from an HIV vaccine trial is used to demonstrate the proposed method. The Canadian Journal of Statistics 39: 557–577; 2011. © 2011 Statistical Society of Canada

13.
In this article, we develop regression models with cross‐classified responses. Conditional independence structures can be explored/exploited through the selective inclusion/exclusion of terms in a certain functional ANOVA decomposition, and the estimation is done nonparametrically via the penalized likelihood method. A cohort of computational and data analytical tools are presented, which include cross‐validation for smoothing parameter selection, Kullback–Leibler projection for model selection, and Bayesian confidence intervals for odds ratios. Random effects are introduced to model possible correlations such as those found in longitudinal and clustered data. Empirical performances of the methods are explored in simulation studies of limited scales, and a real data example is presented using some eyetracking data from linguistic studies. The techniques are implemented in a suite of R functions, whose usage is briefly described in the appendix. The Canadian Journal of Statistics 39: 591–609; 2011. © 2011 Statistical Society of Canada

14.
This paper discusses multivariate interval‐censored failure time data observed when several correlated survival times of interest exist and only interval censoring is available for each survival time. Such data occur in many fields, for instance, studies of the development of physical symptoms or diseases in several organ systems. A marginal inference approach was used to create a linear transformation model and applied to bivariate interval‐censored data arising from a diabetic retinopathy study and an AIDS study. The results of simulation studies that were conducted to evaluate the performance of the presented approach suggest that it performs well. The Canadian Journal of Statistics 41: 275–290; 2013 © 2013 Statistical Society of Canada

15.
We study estimation and feature selection problems in mixture‐of‐experts models. An $l_2$-penalized maximum likelihood estimator is proposed as an alternative to the ordinary maximum likelihood estimator. The estimator is particularly advantageous when fitting a mixture‐of‐experts model to data with many correlated features. It is shown that the proposed estimator is root-$n$ consistent, and simulations show its superior finite sample behaviour compared to that of the maximum likelihood estimator. For feature selection, two extra penalty functions are applied to the $l_2$-penalized log‐likelihood function. The proposed feature selection method is computationally much more efficient than the popular all‐subset selection methods. Theoretically it is shown that the method is consistent in feature selection, and simulations support our theoretical results. A real‐data example is presented to demonstrate the method. The Canadian Journal of Statistics 38: 519–539; 2010 © 2010 Statistical Society of Canada

16.
Generalized linear mixed models (GLMMs) are often used for analyzing cluster correlated data, including longitudinal data and repeated measurements. Full unrestricted maximum likelihood (ML) approaches for inference on both fixed- and random-effects parameters in GLMMs have been extensively studied in the literature. However, parameter orderings or constraints may occur naturally in practice, and in such cases, the efficiency of a statistical method is improved by incorporating the parameter constraints into the ML estimation and hypothesis testing. In this paper, inference for GLMMs under linear inequality constraints is considered. The asymptotic properties of the constrained ML estimators and constrained likelihood ratio tests for GLMMs have been studied. Simulations investigated the empirical properties of the constrained ML estimators, compared to their unrestricted counterparts. An application to a recent survey on Canadian youth smoking patterns is also presented. As these survey data exhibit natural parameter orderings, a constrained GLMM has been considered for data analysis. The Canadian Journal of Statistics 40: 243–258; 2012 © 2012 Crown in the right of Canada
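To make "ML under linear inequality constraints" concrete in the simplest setting: for normal group means under a monotone ordering, the constrained MLE is the isotonic regression of the sample means and is computable by the pool-adjacent-violators algorithm. This toy case is far from a full GLMM, but it shows the pooling mechanics that ordering constraints induce.

```python
def isotonic_mle(means, weights):
    """Pool-Adjacent-Violators: ML estimate of normal group means under the
    ordering constraint mu_1 <= mu_2 <= ... <= mu_k.
    `weights` are the group precisions (e.g. group sizes).  Each block stores
    [pooled mean, pooled weight, number of original groups it covers]."""
    blocks = [[m, w, 1] for m, w in zip(means, weights)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:  # ordering violated: pool
            m1, w1, c1 = blocks[i]
            m2, w2, c2 = blocks[i + 1]
            blocks[i:i + 2] = [[(w1 * m1 + w2 * m2) / (w1 + w2), w1 + w2, c1 + c2]]
            i = max(i - 1, 0)  # pooling may create a new violation to the left
        else:
            i += 1
    fitted = []
    for m, _, c in blocks:
        fitted.extend([m] * c)  # expand pooled values back to the groups
    return fitted
```

For example, sample means (3, 1, 2) with equal weights violate the ordering twice and pool into the single constrained estimate (2, 2, 2), while an already ordered input is returned unchanged.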

17.
Lachenbruch (1976, 2001) introduced two‐part tests for comparison of two means in zero‐inflated continuous data. We extend this approach and compare k independent distributions (by comparing their means, either overall or the departure from equal proportion of zeros and equal means of nonzero values) by introducing two tests: a two‐part Wald test and a two‐part likelihood ratio test. If the continuous part of the distributions is lognormal then the proposed two test statistics have asymptotically chi‐square distribution with $2(k-1)$ degrees of freedom. A simulation study was conducted to compare the performance of the proposed tests with several well‐known tests such as ANOVA, Welch (1951), Brown & Forsythe (1974), Kruskal–Wallis, and the one‐part Wald test proposed by Tu & Zhou (1999). Results indicate that the proposed tests keep the nominal type I error and have consistently best power among all tests being compared. An application to rainfall data is provided as an example. The Canadian Journal of Statistics 39: 690–702; 2011. © 2011 Statistical Society of Canada
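A plain likelihood-ratio version of the two-part decomposition can be sketched directly: a binomial LRT on the proportions of zeros plus a normal-theory LRT on the logs of the nonzero values, summed and referred to chi-squared with 2(k-1) df. This illustrates the decomposition only and is not necessarily the authors' exact Wald or LRT statistic.

```python
import math

def two_part_lrt(groups):
    """Two-part LR statistic for k zero-inflated lognormal samples.
    Part 1: binomial LRT comparing the k zero proportions (df k-1).
    Part 2: normal LRT on log nonzero values, common variance (df k-1).
    Returns (statistic, degrees of freedom = 2*(k-1))."""
    k = len(groups)
    n = [len(g) for g in groups]
    z = [sum(1 for x in g if x == 0) for g in groups]
    p_pool = sum(z) / sum(n)

    def bin_ll(zz, nn, p):  # binomial log-likelihood with 0*log(0) = 0
        ll = 0.0
        if zz > 0:
            ll += zz * math.log(p)
        if nn - zz > 0:
            ll += (nn - zz) * math.log(1 - p)
        return ll

    lrt1 = 2 * sum(bin_ll(zi, ni, zi / ni) - bin_ll(zi, ni, p_pool)
                   for zi, ni in zip(z, n))

    logs = [[math.log(x) for x in g if x > 0] for g in groups]
    all_logs = [v for g in logs for v in g]
    N = len(all_logs)
    grand = sum(all_logs) / N
    sse1 = sum(sum((v - sum(g) / len(g))**2 for v in g) for g in logs)
    sse0 = sum((v - grand)**2 for v in all_logs)
    lrt2 = N * math.log(sse0 / sse1)  # -2 log LR for equal normal means

    return lrt1 + lrt2, 2 * (k - 1)

stat, df = two_part_lrt([[0, 0, 1.0, 2.718], [0, 1.0, 2.718, 7.389]])
```

Both parts are nonnegative by construction (each compares a restricted to an unrestricted maximum), so the combined statistic is as well.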

18.
This article concerns the variance estimation in the central limit theorem for finite recurrent Markov chains. The associated variance is calculated in terms of the transition matrix of the Markov chain. We prove the equivalence of different matrix forms representing this variance. The maximum likelihood estimator for this variance is constructed and it is proved that it is strongly consistent and asymptotically normal. The main part of our analysis consists in presenting closed matrix forms for this new variance. Additionally, we prove the asymptotic equivalence between the empirical and the maximum likelihood estimation (MLE) for the stationary distribution.
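One standard closed matrix form for this variance uses the fundamental matrix Z = (I - P + Π)^{-1}: for a centred functional f_c = f - πf, the CLT variance of n^{-1/2} Σ f(X_i) is ⟨f_c, (2Z - I) f_c⟩_π. The sketch below computes it from a transition matrix; whether this is the particular matrix form the paper favours is not claimed.

```python
import numpy as np

def mc_clt_variance(P, f):
    """Asymptotic variance in the CLT for (1/sqrt(n)) * sum_i f(X_i), for a
    finite ergodic Markov chain with transition matrix P, via the fundamental
    matrix Z = inv(I - P + Pi):
        sigma^2 = sum_i pi_i * f_c(i) * ((2Z - I) f_c)(i),  f_c = f - pi.f"""
    P = np.asarray(P, float)
    f = np.asarray(f, float)
    k = P.shape[0]
    # stationary distribution: left Perron eigenvector of P
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi = pi / pi.sum()
    Pi = np.outer(np.ones(k), pi)  # matrix with every row equal to pi
    Z = np.linalg.inv(np.eye(k) - P + Pi)
    fc = f - pi @ f
    return float(pi @ (fc * ((2 * Z - np.eye(k)) @ fc)))
```

Sanity check: for an "i.i.d. chain" whose rows all equal π, I - P + Π = I, so Z = I and the formula collapses to the ordinary stationary variance of f, e.g. 0.3 × 0.7 = 0.21 for an indicator under π = (0.3, 0.7).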

19.
Low income proportion is an important index in comparisons of poverty in countries around the world. The stability of a society depends heavily on this index. An accurate and reliable estimation of this index plays an important role in governments' economic policies. In this paper, the authors study empirical likelihood‐based inferences for a low income proportion under the simple random sampling and stratified random sampling designs. It is shown that the limiting distributions of the empirical likelihood ratios for the low income proportion are the scaled chi‐square distributions. The authors propose various empirical likelihood‐based confidence intervals for the low income proportion. Extensive simulation studies are conducted to evaluate the relative performance of the normal approximation‐based interval, bootstrap‐based intervals, and the empirical likelihood‐based intervals. The proposed methods are also applied to analyzing a real economic survey income dataset. The Canadian Journal of Statistics 39: 1–16; 2011 © 2011 Statistical Society of Canada

20.
The authors propose a profile likelihood approach to linear clustering which explores potential linear clusters in a data set. For each linear cluster, an errors‐in‐variables model is assumed. The optimization of the derived profile likelihood can be achieved by an EM algorithm. Its asymptotic properties and its relationships with several existing clustering methods are discussed. Methods to determine the number of components in a data set are adapted to this linear clustering setting. Several simulated and real data sets are analyzed for comparison and illustration purposes. The Canadian Journal of Statistics 38: 716–737; 2010 © 2010 Statistical Society of Canada


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号