Similar Articles (20 results)
1.
Estimation of the population mean based on right censored observations is considered. The naive sample mean will be an inconsistent and asymptotically biased estimator in this case. An estimate suggested in textbooks is to compute the area under a Kaplan–Meier curve. In this note, two more seemingly different approaches are introduced. Students’ reaction to these approaches was very positive in an introductory survival analysis course the author recently taught.
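A minimal sketch (my own, not from the note) of the textbook estimator mentioned above: the restricted mean survival time, computed as the area under the Kaplan–Meier curve up to the largest observed time. The function name and the pure-Python implementation are illustrative assumptions.

```python
def km_mean(times, events):
    """Restricted mean survival time: the area under the Kaplan-Meier
    estimate of S(t) up to the largest observed time.
    times: observed times; events: 1 = failure observed, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0       # current value of the Kaplan-Meier estimate
    prev_t = 0.0  # left endpoint of the current step of S(t)
    area = 0.0
    i = 0
    while i < len(data):
        t = data[i][0]
        # since data is sorted, all observations tied at time t are adjacent
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        deaths = sum(1 for tt, e in data[i:] if tt == t and e == 1)
        area += s * (t - prev_t)            # rectangle under the current step
        if deaths:
            s *= 1.0 - deaths / n_at_risk   # Kaplan-Meier multiplicative update
        n_at_risk -= ties
        prev_t = t
        i += ties
    return area

# With no censoring the estimator reduces to the ordinary sample mean:
print(km_mean([1, 2, 3, 4], [1, 1, 1, 1]))  # 2.5
```

With censored observations the curve is flattened less at censoring times, so the area differs from the naive sample mean, which is exactly the bias the note discusses.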

2.
In the frailty Cox model, frequentist approaches often present problems of numerical resolution, convergence, and variance calculation. The Bayesian approach offers an alternative. The goal of this study was to compare, using real (calf gastroenteritis) and simulated data, the results obtained with the MCMC method used in the Bayesian approach versus two frequentist approaches: the Newton–Raphson algorithm to solve a penalized likelihood and the EM algorithm. The results obtained showed that when the number of groups in the population decreases, the Bayesian approach gives a less biased estimation of the frailty variance and of the group fixed effect than the frequentist approaches.

3.
In this paper, some extended Rasch models are analyzed in the presence of longitudinal measurements of a latent variable. Two main approaches, multidimensional and multilevel, are compared: we investigate the different information that can be obtained from the latent variable, and we give advice on the use of the different kinds of models. The multidimensional and multilevel approaches are illustrated with a simulation study and with a longitudinal study on the health-related quality of life in terminal cancer patients.

4.
Students in a calculus-based probability course will often see the expectation formula for nonnegative continuous random variables in terms of the survival function. This alternative expectation formula has a wide spectrum of applications. It is natural to ask whether there is a multivariate version of this formula. This note gives an affirmative answer by establishing such a formula using two different approaches. The two approaches employed in this note correspond to the two approaches for the univariate case. Supplementary materials for this article are available online.
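As a concrete illustration (my own, not from the note; the exponential example and function names are assumptions), the univariate identity E[X] = ∫₀^∞ P(X > x) dx and its bivariate analogue E[XY] = ∫₀^∞ ∫₀^∞ P(X > x, Y > y) dy dx can be checked numerically:

```python
import math

def survival_expectation(S, upper=50.0, n=100_000):
    """Midpoint-rule approximation of the integral of S(x) on [0, upper]."""
    h = upper / n
    return sum(S((k + 0.5) * h) for k in range(n)) * h

def bivariate_survival_expectation(S2, upper=20.0, n=800):
    """Midpoint-rule approximation of the double integral of S2(x, y),
    which the multivariate formula identifies with E[XY]."""
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += sum(S2(x, (j + 0.5) * h) for j in range(n))
    return total * h * h

# X ~ Exponential(rate 2): S(x) = exp(-2x), so E[X] = 1/2 ...
print(survival_expectation(lambda x: math.exp(-2.0 * x)))   # ~ 0.5
# ... and for independent copies, E[XY] = E[X] E[Y] = 1/4.
print(bivariate_survival_expectation(
    lambda x, y: math.exp(-2.0 * (x + y))))                 # ~ 0.25
```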

5.
Use of auxiliary variables for generating proposal variables within a Metropolis–Hastings setting has been suggested in many different contexts. This has in particular been of interest for simulation from complex distributions such as multimodal distributions, or in transdimensional approaches. For many of these approaches, the acceptance probabilities that are used appear somewhat magical, and separate proofs of their validity have been given in each case. In this article, we present a general framework for the construction of acceptance probabilities in auxiliary variable proposal generation. In addition to showing the similarities between many of the algorithms proposed in the literature, the framework also demonstrates that there is great flexibility in how acceptance probabilities can be constructed. With this flexibility, alternative acceptance probabilities are suggested. Some numerical experiments are also reported.

6.
In this paper, we present a fractional decomposition of the probability generating function of the innovation process of the first-order non-negative integer-valued autoregressive [INAR(1)] process to obtain the corresponding probability mass function. We also provide a comprehensive review of integer-valued time series models, based on the concept of thinning operators with geometric-type marginals. In particular, we develop two fractional approaches to obtain the distribution of innovation processes of the INAR(1) model and show that the distribution of the innovations sequence has geometric-type distribution. These approaches are discussed in detail and illustrated through a few examples.

7.
Many of the recently developed alternative econometric approaches to the construction and estimation of life-cycle consistent models using individual data can be viewed as alternative choices for conditioning variables that summarise past decisions and future anticipations. By ingenious choice of this conditioning variable and by exploitation of the duality relationships between the alternative specifications, many currently available micro-data sets can be used for the estimation of life-cycle consistent models. In reviewing the alternative approaches, their stochastic properties and implicit preference restrictions are highlighted. Indeed, empirical specifications that are parameterised in a form of direct theoretical interest can often be shown to be unnecessarily restrictive, while dual representations may provide more flexible econometric models. These results indicate the particular advantages of different types of data in retrieving life-cycle consistent preference parameters and the appropriate, most flexible, econometric approach for each type of data. A methodology for relaxing the intertemporal separability assumption is developed, and the advantages and disadvantages of alternative approaches in this framework are considered.

9.
In fitting a generalized linear model, many authors have noticed that data sets can show greater residual variability than predicted under the exponential family. Two main approaches have been used to model this overdispersion. The first uses a sampling density that is a conjugate mixture of exponential family distributions. The second uses a quasilikelihood that adds a new scale parameter to the exponential likelihood. The approaches are compared by means of a Bayesian analysis using noninformative priors. Examples indicate that the posterior analyses under the two approaches can differ significantly.

10.
Two methods that are often used to evaluate the run length distribution of quality control charts are the Markov chain and integral equation approaches. Both methods have been used to evaluate the cumulative sum (CUSUM) charts and the exponentially weighted moving average (EWMA) control charts. The Markov chain approach involves "discretizing" the possible values which can be plotted. Using properties of finite Markov chains, expressions for the distribution of the run length, and for the average run length (ARL), can be obtained. For the CUSUM and EWMA charts there exist integral equations whose solution gives the ARL. Approximate methods can then be used to solve the integral equation. In this article we show that if the product midpoint rule is used to approximate the integral in the integral equation, then both approaches yield the same approximations for the ARL. In addition we show that the recursive expressions for the probability functions are the same for the two approaches. These results establish the integral equation approach as preferable whenever an integral equation can be found.
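A small numerical sketch (my own; the chart parameters, standard normal observations, and symmetric limits ±H are assumptions, not taken from the article) of the connection it describes: applying the product midpoint rule to the ARL integral equation L(z) = 1 + (1/λ) ∫ L(u) φ((u − (1−λ)z)/λ) du over [−H, H] produces exactly the finite Markov chain approximation on the cell midpoints.

```python
import numpy as np

def ewma_arl(lam=0.1, H=0.5, N=101, mu=0.0):
    """ARL of a two-sided EWMA chart (Z_t = (1-lam) Z_{t-1} + lam X_t,
    X_t ~ N(mu, 1), signal when |Z_t| > H), via the product midpoint rule."""
    h = 2.0 * H / N
    m = -H + (np.arange(N) + 0.5) * h                  # cell midpoints

    def phi(x):                                        # N(mu, 1) density
        return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

    # Q[i, j]: midpoint-rule weight for moving from midpoint i into cell j;
    # this same matrix is the substochastic transition matrix of the
    # discretized Markov chain, which is the equivalence in question.
    Q = (h / lam) * phi((m[None, :] - (1.0 - lam) * m[:, None]) / lam)
    L = np.linalg.solve(np.eye(N) - Q, np.ones(N))     # solves (I - Q) L = 1
    return L[N // 2]                                   # chart started at Z_0 = 0

# A mean shift should shorten the run length:
print(ewma_arl(mu=0.0), ewma_arl(mu=1.0))
```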

11.
In this paper, we restrict attention to the problem of subset selection of normal populations. The approaches and results of some previous comparison studies of subset selection procedures are discussed briefly, and the results of a new Monte Carlo study comparing the performance of two classical procedures and the Bayes procedure are then presented.

12.
In this article, we consider inference about the correlation coefficients of several bivariate normal distributions. We first propose computational approach tests for testing the equality of the correlation coefficients. These approaches are in fact parametric bootstrap tests, and simulation studies show that they perform very satisfactorily, with actual sizes closer to the nominal level than those of existing approaches. We also present a computational approach test and a parametric bootstrap confidence interval for inference about the common correlation coefficient. Finally, all the approaches are illustrated using two real examples.
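A hypothetical sketch of a parametric bootstrap test for H₀: ρ₁ = … = ρ_k across k bivariate normal populations (the general idea only; the test statistic, the pooling rule, and all names below are my assumptions, not the authors' procedure):

```python
import numpy as np

def boot_equal_corr_test(samples, B=1000, seed=0):
    """Parametric bootstrap p-value for equality of correlation coefficients.
    samples: list of (n_i, 2) arrays, one per bivariate normal population."""
    rng = np.random.default_rng(seed)

    def spread(datasets):
        # test statistic: range of Fisher-z transformed sample correlations
        z = [np.arctanh(np.corrcoef(d, rowvar=False)[0, 1]) for d in datasets]
        return max(z) - min(z)

    t_obs = spread(samples)
    ns = np.array([len(d) for d in samples])
    rs = np.array([np.corrcoef(d, rowvar=False)[0, 1] for d in samples])
    # pooled correlation under H0: inverse-variance weighted average on z scale
    r0 = np.tanh(np.sum((ns - 3) * np.arctanh(rs)) / np.sum(ns - 3))
    cov = np.array([[1.0, r0], [r0, 1.0]])
    # simulate the null distribution of the statistic and count exceedances
    exceed = sum(
        spread([rng.multivariate_normal([0.0, 0.0], cov, size=n) for n in ns])
        >= t_obs
        for _ in range(B)
    )
    return (exceed + 1) / (B + 1)
```

Under H₀ the p-value should be roughly uniform; a small value indicates unequal correlations.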

13.
In the econometrics literature, it is standard practice to use the existing instrumental variables as well as generalized method of moments approaches for the estimation of the parameters of a linear dynamic mixed model for panel data. In this paper, we introduce a generalized quasi-likelihood estimation approach that produces estimates with smaller mean squared errors when compared with the aforementioned and other existing approaches.

14.
Many methods have been developed in the literature for regression analysis of current status data with noninformative censoring, and some approaches have also been proposed for semiparametric regression analysis of current status data with informative censoring. However, the existing approaches for the latter situation are mainly based on specific models such as the proportional hazards model and the additive hazards model. Correspondingly, in this paper, we consider a general class of semiparametric linear transformation models and develop a sieve maximum likelihood estimation approach for inference. In the method, a copula model is employed to describe the informative censoring, or the relationship between the failure time of interest and the censoring time, and Bernstein polynomials are used to approximate the nonparametric functions involved. The asymptotic consistency and normality of the proposed estimators are established, and an extensive simulation study indicates that the proposed approach works well in practical situations. An illustrative example is also provided.

15.
In this article, hypothesis testing and interval estimation for the reliability parameter are considered in balanced and unbalanced one-way random models. The tests and confidence intervals for the reliability parameter are developed using the concepts of generalized p-value and generalized confidence interval. Furthermore, simulation results are presented to compare the performance of the proposed and existing approaches. For balanced models, the simulation results indicate that the proposed approach provides satisfactory coverage probabilities and performs better than the existing approaches across a wide array of scenarios, especially for small sample sizes. For unbalanced models, the simulation results show that the two proposed approaches perform more satisfactorily than the existing approach in most cases. Finally, the proposed approaches are illustrated using two real examples.

16.
Routine implementation of the Bayesian paradigm requires an efficient approach to the calculation and display of posterior or predictive distributions for given likelihood and prior specifications. In this paper we shall review some of the analytic and numerical approaches currently available, describing in detail a numerical integration strategy based on Gaussian quadrature, and an associated strategy for the reconstruction and display of distributions based on spline techniques.
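A toy illustration (mine, not the paper's strategy; the normal likelihood with normal prior and every name below are assumptions) of posterior computation by Gaussian quadrature: Gauss-Hermite nodes, centred and scaled at a rough posterior mode, normalize the posterior and give its moments.

```python
import numpy as np

def posterior_mean_gh(x, prior_sd=10.0, n_nodes=40):
    """Posterior mean of theta for x_i ~ N(theta, 1), theta ~ N(0, prior_sd^2),
    computed by Gauss-Hermite quadrature (no conjugacy used)."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    mode = np.mean(x)                  # rough posterior mode (the MLE)
    scale = 1.0 / np.sqrt(len(x))      # rough posterior spread
    theta = mode + np.sqrt(2.0) * scale * nodes

    def log_post(t):                   # unnormalized log posterior on a grid t
        return (-0.5 * np.sum((x[:, None] - t[None, :]) ** 2, axis=0)
                - 0.5 * (t / prior_sd) ** 2)

    # divide out the e^{-nodes^2} Gauss-Hermite weight; stabilize at the mode
    f = np.exp(log_post(theta) - log_post(np.array([mode]))[0] + nodes ** 2)
    w = weights * f
    return np.sum(w * theta) / np.sum(w)   # normalizing constant cancels

# Conjugacy gives the exact answer n * xbar / (n + 1/prior_sd^2) to check against.
x = np.array([1.0, 2.0, 3.0])
print(posterior_mean_gh(x))  # ~ 6/3.01 ~ 1.9934
```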

17.
The evaluation of multi-step-ahead density forecasts is complicated by the serial correlation of the corresponding probability integral transforms. In the literature, three testing approaches can be found that take this problem into account. However, these approaches rely on data-dependent critical values, ignore important information and therefore lack power, or suffer from size distortions even asymptotically. This article proposes a new testing approach based on raw moments. It is extremely easy to implement, uses standard critical values, can include all moments regarded as important, and has correct asymptotic size. It is found to have good size and power properties in finite samples if it is based on the (standardized) probability integral transforms.

18.
1. Introduction. At present, a common problem facing universities is how to improve teaching quality, and how to choose teaching methods according to the characteristics of students and of their majors has become a new topic in the field of education. According to the learning-method matrix established by Dr. Liang Meirong [1] (see Table 1), learning methods are divided into nine categories, beginning with the Surface Approach (SA)

19.
Reference-scaled average bioequivalence (RSABE) approaches for highly variable drugs are based on linearly scaling the bioequivalence limits according to the reference formulation within-subject variability. RSABE methods have type I error control problems around the value where the limits change from constant to scaled. In all these methods, the probability of type I error has only one absolute maximum, at this switching variability value. This allows adjusting the significance level to obtain statistically correct procedures (that is, those in which the probability of type I error remains below the nominal significance level), at the expense of some potential power loss. In this paper, we explore adjustments to the EMA and FDA regulatory RSABE approaches, and to a possible improvement of the original EMA method, designated as HoweEMA. The resulting adjusted methods are completely correct with respect to type I error probability. The power loss is generally small and tends to become irrelevant for moderately large (affordable in real studies) sample sizes.

20.
A dual-record system (DRS) (equivalently, two-sample capture–recapture experiments) model with time and behavioural response variation has attracted much attention, specifically in official statistics and epidemiology, as the assumption of list independence often fails. The relevant model suffers from a parameter identifiability problem, and suitable Bayesian methodologies can be helpful. In this article, we formulate population size estimation in DRS as a missing data problem, and two empirical Bayes approaches are proposed along with a discussion of an existing Bayes treatment. Some features of these methods and the associated posterior convergence are discussed. An extensive simulation study finds that our proposed approaches compare favourably with the existing Bayes approach for this complex model, depending on the availability and directional nature of the underlying behavioural response effect. A real-data example illustrates these methods.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号