Similar Articles (20 results)
1.
In this paper, three competing survival function estimators are compared under the assumptions of the so-called Koziol–Green model, which is a simple model of informative random censoring. It is shown that the model-specific estimators of Ebrahimi and of Abdushukurov, Cheng, and Lin are asymptotically equivalent. Further, exact expressions for the (noncentral) moments of these estimators are given, and their biases are compared analytically with the bias of the familiar Kaplan–Meier estimator. Finally, MSE comparisons of the three estimators are given for selected rates of censoring.
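As a concrete illustration of the estimators being compared, the following sketch (not taken from the paper) contrasts the Kaplan–Meier product-limit estimator with the Abdushukurov–Cheng–Lin estimator S(t) = [H(t)]^γ, where H is the empirical survival function of the observed times and γ the uncensored fraction; the exponential lifetime and censoring distributions are assumptions chosen so that the Koziol–Green proportionality holds.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.exponential(1.0, n)      # true lifetimes, S(t) = exp(-t)
c = rng.exponential(2.0, n)      # censoring; Koziol-Green proportionality holds
z = np.minimum(x, c)             # observed times
d = (x <= c).astype(int)         # 1 = uncensored

def kaplan_meier(z, d, t):
    """Product-limit estimate of S(t)."""
    order = np.argsort(z)
    z, d = z[order], d[order]
    s = 1.0
    for i, (zi, di) in enumerate(zip(z, d)):
        if zi > t:
            break
        if di:                   # event: shrink by (1 - 1/#at-risk)
            s *= 1.0 - 1.0 / (len(z) - i)
    return s

def acl(z, d, t):
    """Abdushukurov-Cheng-Lin estimator: empirical survival of the
    observed times raised to the estimated uncensored fraction."""
    return ((z > t).mean()) ** d.mean()

for t in (0.5, 1.0, 2.0):
    print(t, round(np.exp(-t), 3),
          round(kaplan_meier(z, d, t), 3), round(acl(z, d, t), 3))
```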

2.
Mark–recapture experiments involve capturing individuals from populations of interest, marking and releasing them at an initial sample time, and recapturing individuals from the same populations on subsequent occasions. The Jolly–Seber model is widely used for open populations since it can estimate important parameters such as population size, recruitment, and survival. However, one of the Jolly–Seber model assumptions that can easily be violated is that of no tag loss. Cowen and Schwarz [L. Cowen, C.J. Schwarz, The Jolly–Seber model with tag loss, Biometrics 62 (2006) 677–705] developed the Jolly–Seber-Tag-Loss (JSTL) model to avoid this violation; this model was extended to deal with group heterogeneity by Gonzalez and Cowen [S. Gonzalez, L. Cowen, The Jolly–Seber-tag-loss model with group heterogeneity, The Arbutus Review 1 (2010) 30–42]. In this paper, we study the group-heterogeneous JSTL (GJSTL) model through simulations and find that as sample size and the fraction of double-tagged individuals increase, the bias of parameter estimates is reduced and their precision increases. We apply this model to a study of rock lobsters Jasus edwardsii in Tasmania, Australia.
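The value of double tagging can be seen in a toy one-occasion sketch (all rates hypothetical, and far simpler than the GJSTL likelihood): animals that retain at least one of two independently retained tags remain identifiable, and the split between one-tag and two-tag recaptures identifies the retention rate.

```python
import numpy as np

rng = np.random.default_rng(1)
N, phi, p, lam = 1000, 0.8, 0.5, 0.9   # hypothetical: released, survival,
                                       # capture prob., per-tag retention

alive = rng.random(N) < phi            # survive to the recapture occasion
caught = alive & (rng.random(N) < p)   # recaptured
tag1 = rng.random(N) < lam             # each tag retained independently
tag2 = rng.random(N) < lam             # (losing both makes an animal unmarked)

both = (caught & tag1 & tag2).sum()
one = (caught & (tag1 ^ tag2)).sum()

# Among identifiable recaptures, P(two tags) = lam / (2 - lam),
# so a moment estimator is lam_hat = 2*both / (2*both + one).
lam_hat = 2 * both / (2 * both + one)
print("true retention:", lam, "estimated:", round(lam_hat, 3))
```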

3.
In this article, we introduce a new class of estimators called the sK type principal components estimators to combat multicollinearity, which include the principal components regression (PCR) estimator, the rk estimator and the sK estimator as special cases. Necessary and sufficient conditions for the superiority of the new estimator over the PCR estimator, the rk estimator and the sK estimator are derived in the sense of the mean squared error matrix criterion. A Monte Carlo simulation study and a numerical example are given to illustrate the performance of the proposed estimator.
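To fix notation for the special cases mentioned, here is a brief numpy sketch of regression on principal components with an optional ridge-type shrinkage k, giving the PCR estimator at k = 0 and an rk-type estimator at k > 0; the paper's sK estimator generalizes these further and its exact form is not reproduced here. The helper name is ours.

```python
import numpy as np

def pcr_family(X, y, r, k=0.0):
    """Regression on the top-r principal components with optional
    ridge-type shrinkage k: k = 0 gives the PCR estimator, k > 0 an
    rk-type estimator. (The paper's sK estimator generalizes these;
    its exact form is not reproduced here.)"""
    eigval, T = np.linalg.eigh(X.T @ X)
    idx = np.argsort(eigval)[::-1][:r]        # indices of top-r components
    Tr = T[:, idx]
    return Tr @ ((Tr.T @ X.T @ y) / (eigval[idx] + k))

rng = np.random.default_rng(2)
n = 100
X = rng.normal(size=(n, 5))
X[:, 4] = X[:, 3] + 0.01 * rng.normal(size=n)  # near-exact collinearity
beta = np.array([1.0, 0.5, -0.5, 1.0, 0.0])
y = X @ beta + rng.normal(size=n)

print("OLS:       ", np.linalg.lstsq(X, y, rcond=None)[0].round(2))
print("PCR r=4:   ", pcr_family(X, y, r=4).round(2))
print("rk r=4,k=1:", pcr_family(X, y, r=4, k=1.0).round(2))
```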

4.
This article presents a new class of realized stochastic volatility models based jointly on realized volatilities and returns. We generalize the traditional logarithm transformation of realized volatility to the Box–Cox transformation, a more flexible parametric family of transformations. A two-step maximum likelihood estimation procedure is introduced to estimate this model, following Koopman and Scharth (2013). Simulation results show that the two-step estimator performs well, and that a misspecified log transformation may lead to inaccurate parameter estimation and excess skewness and kurtosis. Finally, an empirical investigation of realized volatility measures and daily returns is carried out for several stock indices.
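For reference, the Box–Cox family used here is z = (x^λ − 1)/λ for λ ≠ 0, with the log transform recovered as λ → 0. A minimal sketch on a hypothetical realized-variance series (the λ values and the lognormal data are illustrative assumptions; in the model itself λ is estimated, not fixed):

```python
import numpy as np

def box_cox(x, lam):
    """Box-Cox transform; reduces to the log transform as lam -> 0."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x**lam - 1.0) / lam

# Hypothetical realized-variance series (strictly positive, right-skewed).
rng = np.random.default_rng(3)
rv = np.exp(rng.normal(-9.0, 0.8, 1000))

for lam in (0.0, 0.25, 0.5):
    z = box_cox(rv, lam)
    print(lam, round(float(z.mean()), 3), round(float(z.std()), 3))
```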

5.
This article is concerned with efficient estimation in a semiparametric model. We consider pseudo maximum likelihood estimation and prove that the proposed estimator is asymptotically efficient in the sense of Cramér; that is, the estimator attains the smallest possible asymptotic mean squared error.

6.
In this paper, we compare two estimators of the parameters of a linear Gauss–Markov model: the RLE (restricted Liu estimator) and the RLSE (restricted least squares estimator). Using generalized inverses of matrices, we derive equivalence conditions for the superiority of the RLE under the MSE criterion.
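As background for the comparison, here is a sketch of both estimators under a linear restriction Rb = r. The Liu-type shrinkage form used below, F_d = (X′X + I)⁻¹(X′X + dI) applied to the RLSE, is one common definition from the Liu-estimator literature and is an assumption here; the paper's exact form is not quoted.

```python
import numpy as np

def rlse(X, y, R, r):
    """Restricted least squares: OLS projected onto {b : R b = r}."""
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    A = R @ XtX_inv @ R.T
    return b - XtX_inv @ R.T @ np.linalg.solve(A, R @ b - r)

def rle(X, y, R, r, d=0.5):
    """Liu-type shrinkage F_d = (X'X + I)^{-1}(X'X + d I) applied to
    the RLSE -- one common form from the Liu-estimator literature,
    used here as an assumption; the paper's definition may differ."""
    XtX = X.T @ X
    I = np.eye(XtX.shape[0])
    F = np.linalg.solve(XtX + I, XtX + d * I)
    return F @ rlse(X, y, R, r)

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + rng.normal(size=50)
R, r = np.array([[1.0, 1.0, 1.0]]), np.array([6.0])   # restriction: sum = 6
print("RLSE:", rlse(X, y, R, r).round(3))
print("RLE: ", rle(X, y, R, r).round(3))
```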

7.
8.
A goodness-of-fit test for the Gumbel distribution is proposed. The test is based on the Kullback–Leibler discrimination information methodology of Song (2002). Critical values of the test were obtained by Monte Carlo simulation for small sample sizes and different levels of significance. The proposed test is compared, in terms of power against various alternative distributions, with the tests developed by Stephens (1977), Chandra et al. (1981), and Kinnison (1989). Simulation results show that the Kullback–Leibler information test has higher power than some of the tests studied.
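The general recipe behind such tests can be sketched as follows: estimate the Kullback–Leibler divergence between the data and a fitted Gumbel by combining a Vasicek-type entropy estimate with the fitted log-density, and calibrate the statistic by simulating its null distribution. The window size m, the moment-style parameter fit, and the sample sizes below are illustrative assumptions, not Song's exact formulation.

```python
import numpy as np

def kl_statistic(x, m=4):
    """KL-type statistic: -(Vasicek entropy estimate) - mean fitted
    Gumbel log-density. Small values support the Gumbel hypothesis."""
    x = np.sort(x)
    n = len(x)
    lo = np.maximum(np.arange(n) - m, 0)
    hi = np.minimum(np.arange(n) + m, n - 1)
    entropy = np.mean(np.log(n * (x[hi] - x[lo]) / (hi - lo)))
    beta = np.sqrt(6.0) * x.std() / np.pi       # moment-style Gumbel fit
    mu = x.mean() - 0.5772 * beta
    z = (x - mu) / beta
    loglik = np.mean(-np.log(beta) - z - np.exp(-z))
    return -entropy - loglik

rng = np.random.default_rng(5)
n, reps = 30, 5000
null = np.sort([kl_statistic(rng.gumbel(0.0, 1.0, n)) for _ in range(reps)])
crit = null[int(0.95 * reps)]                   # Monte Carlo 5% critical value
print("critical value:", round(crit, 3))
print("reject a normal sample?", kl_statistic(rng.normal(0.0, 1.0, n)) > crit)
```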

9.
The quadratic dose–response model is often used in radiobiology studies. Since the model is nonlinear in its parameters, the least squares estimates are usually determined by an iterative procedure. We give a simple closed-form approximation for estimating the parameters of the model.
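In the linear-quadratic form common in radiobiology, S(D) = exp(−(αD + βD²)). One obvious closed-form device (shown here as an illustration, not necessarily the paper's approximation) is to regress −log S on (D, D²), which is linear in (α, β), and compare with iterative nonlinear least squares; the doses and noise level are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0])        # Gy (hypothetical)
rng = np.random.default_rng(6)
surv = np.exp(-(0.3 * dose + 0.03 * dose**2))          # alpha=0.3, beta=0.03
surv = surv * np.exp(rng.normal(0.0, 0.05, dose.size)) # multiplicative noise

# Closed-form device: -log S is linear in (alpha, beta) given (D, D^2).
A = np.column_stack([dose, dose**2])
alpha_c, beta_c = np.linalg.lstsq(A, -np.log(surv), rcond=None)[0]

# Iterative nonlinear least squares on the original scale, for comparison.
(alpha_n, beta_n), _ = curve_fit(
    lambda d, a, b: np.exp(-(a * d + b * d**2)), dose, surv, p0=(0.1, 0.01))

print("closed form: ", round(alpha_c, 3), round(beta_c, 4))
print("nonlinear LS:", round(alpha_n, 3), round(beta_n, 4))
```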

10.
In this article, we consider a simple transient queuing system: a linear birth process with immigration in the presence of twin births. We derive the differential-difference equation and the probability-generating function (p.g.f.) for this process. We then generalize it to a linear birth process with immigration allowing both single and twin births, and further to the case of multiple births. From the p.g.f. of the linear birth process with immigration and twin births, we recover some particular transient queuing processes, such as the linear birth process with twin births and the simple immigration process. Direct derivations of the mean and variance of these processes, without using generating functions, are also discussed.
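The process is easy to simulate directly, which gives a check on the p.g.f.-derived moments: in the twin-birth case the mean satisfies dE[N_t]/dt = 2λE[N_t] + ν, so E[N_t] = (n₀ + ν/(2λ))e^{2λt} − ν/(2λ). The rates below are hypothetical. A minimal Gillespie-style sketch:

```python
import numpy as np

def simulate(n0=1, lam=0.3, nu=0.5, t_end=5.0, seed=0):
    """Gillespie simulation of a linear birth process with immigration;
    each birth adds twins (2 individuals), immigration adds 1."""
    rng = np.random.default_rng(seed)
    n, t = n0, 0.0
    while True:
        rate = lam * n + nu                      # total event rate
        t += rng.exponential(1.0 / rate)
        if t > t_end:
            return n
        n += 2 if rng.random() < lam * n / rate else 1

n0, lam, nu, t_end = 1, 0.3, 0.5, 5.0
runs = [simulate(n0, lam, nu, t_end, seed=s) for s in range(4000)]
expected = (n0 + nu / (2 * lam)) * np.exp(2 * lam * t_end) - nu / (2 * lam)
print("simulated mean:", round(float(np.mean(runs)), 1),
      "theory:", round(expected, 1))
```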

11.
The Peña–Box model is a dynamic factor model whose factors aim to capture the common dynamic movements of a multiple time series. The Peña–Box model can be expressed as a vector autoregressive (VAR) model with constraints. This article derives the maximum likelihood estimates and the likelihood ratio test of the VAR model for Gaussian processes. A test statistic constructed from canonical correlation coefficients is then presented and adjusted for conditional heteroscedasticity. Simulations confirm the validity of the adjustments for conditional heteroscedasticity and show that the proposed statistics perform better than those used in the existing literature.
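The canonical correlations entering such statistics are those between the series and its lags; a generic numpy computation (not the paper's adjusted statistic) on data with one common dynamic factor:

```python
import numpy as np

def canonical_correlations(Y, X):
    """Canonical correlations: singular values of the whitened
    cross-covariance Syy^{-1/2} Syx Sxx^{-1/2} (Cholesky whitening)."""
    Y = Y - Y.mean(0)
    X = X - X.mean(0)
    Ly = np.linalg.cholesky(Y.T @ Y)
    Lx = np.linalg.cholesky(X.T @ X)
    M = np.linalg.solve(Ly, Y.T @ X) @ np.linalg.inv(Lx).T
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(8)
f = np.cumsum(rng.normal(size=300))                # one common dynamic factor
Y = np.outer(f, [1.0, 0.5, -0.8]) + rng.normal(size=(300, 3))
print(canonical_correlations(Y[1:], Y[:-1]).round(3))  # top lag-1 corr near 1
```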

12.
We consider the prediction of new observations in a general Gauss–Markov model. We state the fundamental equations of the best linear unbiased predictor (BLUP) and consider some of its properties. In particular, we focus on linear statistics that preserve enough information for obtaining the BLUP of new observations as a linear function of them. We call such statistics linearly prediction sufficient for new observations and introduce some equivalent characterizations of this new concept.
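The fundamental equation in question takes the familiar form BLUP(y_new) = X_new β̃ + V_new,y V⁻ (y − Xβ̃), with β̃ the BLUE of β. A sketch with a known, invertible equicorrelated covariance — an assumption made for brevity, since the general model allows singular V and generalized inverses:

```python
import numpy as np

rng = np.random.default_rng(9)
n, m = 30, 5
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # observed design
Xs = np.column_stack([np.ones(m), rng.normal(size=m)])   # design of new obs.

# Equicorrelated joint covariance of (y, y_new) -- an assumption for the demo.
V_full = 0.5 * np.eye(n + m) + 0.5 * np.ones((n + m, n + m))
V, Vsy = V_full[:n, :n], V_full[n:, :n]

beta = np.array([1.0, 2.0])
e = rng.multivariate_normal(np.zeros(n + m), V_full)
y, y_new = X @ beta + e[:n], Xs @ beta + e[n:]

Vi = np.linalg.inv(V)
beta_blue = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)  # BLUE of beta
blup = Xs @ beta_blue + Vsy @ Vi @ (y - X @ beta_blue)   # BLUP of y_new
print(np.column_stack([y_new, blup]).round(2))           # actual vs. predicted
```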

13.
A key question for understanding the cross-section of expected returns of equities is the following: which factors, from a given collection of factors, are risk factors; equivalently, which factors are in the stochastic discount factor (SDF)? Though the SDF is unobserved, assumptions about which factors (from the available set) are in the SDF restrict the joint distribution of factors in specific ways, as a consequence of the economic theory of asset pricing. A different starting collection of factors in the SDF leads to a different set of restrictions on the joint distribution of factors. The conditional distribution of equity returns has the same restricted form regardless of what is assumed about the factors in the SDF, as long as the factors are traded; hence the distribution of asset returns is irrelevant for isolating the risk factors. The restricted factor models are distinct (nonnested) and do not arise by omitting or including a variable from a full model, which precludes analysis by standard statistical variable selection methods, such as those based on the lasso and its variants. Instead, we develop what we call a Bayesian model scan strategy in which each factor is allowed to enter or not enter the SDF and the resulting restricted models (of which there are 114,674 in our empirical study) are simultaneously confronted with the data. We use a Student-t distribution for the factors, model-specific independent Student-t distributions for the location parameters, a training sample to fix prior locations, and a creative way to arrive at the joint distribution of several other model-specific parameters from a single prior distribution. This makes our method essentially a scalable, tuned, black-box method that can be applied across our large model space with little to no user intervention. The model marginal likelihoods, and the implied posterior model probabilities, are compared with the prior probability of 1/114,674 for each model to find the best-supported model, and thus the factors most likely to be in the SDF. We provide detailed simulation evidence of the high finite-sample accuracy of the method. Our empirical study with 13 leading factors reveals that the highest marginal likelihood model is a Student-t distributed factor model with 5 degrees of freedom and 8 risk factors.
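The mechanics of such a scan can be illustrated at toy scale. The sketch below enumerates every subset of a handful of factors, scores each with a closed-form Gaussian marginal likelihood in which excluded factors are restricted to zero mean — a deliberately simplified stand-in for the paper's Student-t setup and pricing restrictions — and picks the best-supported subset under equal prior model probabilities:

```python
import numpy as np
from itertools import chain, combinations

rng = np.random.default_rng(10)
T, K = 500, 4
true_mu = np.array([0.4, 0.3, 0.0, 0.0])   # only factors 0 and 1 carry premia
F = rng.normal(true_mu, 1.0, size=(T, K))

def log_marglik(x, in_sdf, tau2=1.0):
    """Gaussian marginal likelihood per factor, sigma = 1 known.
    In-SDF factors get a N(0, tau2) prior on the mean; excluded factors
    are restricted to mean zero (a toy stand-in for the restrictions)."""
    n, s = len(x), x.sum()
    base = -0.5 * n * np.log(2 * np.pi) - 0.5 * (x**2).sum()
    if not in_sdf:
        return base
    v = 1.0 / (n + 1.0 / tau2)             # posterior variance of the mean
    return base + 0.5 * np.log(v / tau2) + 0.5 * s**2 * v

models = list(chain.from_iterable(combinations(range(K), r)
                                  for r in range(K + 1)))
scores = {S: sum(log_marglik(F[:, j], j in S) for j in range(K))
          for S in models}                 # equal prior prob. 1/len(models)
best = max(scores, key=scores.get)
print(len(models), "models scanned; best-supported factor set:", best)
```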

14.
This article presents a general analysis of finite high-dimensional integrals using Importance Sampling (IS), aimed at parameter estimation of Taylor's stochastic volatility (SV) model. After giving an alternative derivation of the Sequential Importance Sampling (SIS) approach used in the previous literature, we propose a new approach to selecting the optimal parameters of the sampler, called Universal Importance Sampling (UIS). UIS minimizes the Monte Carlo variance and numerically performs at least as accurately as the SIS algorithm, while greatly improving computational efficiency. We apply both methods to the SV model on the data and compare the results.
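The core idea — that the sampler's parameters govern the Monte Carlo variance of an importance sampling estimator — can be seen in a generic tail-probability example (a textbook illustration, not the paper's UIS construction for the SV model):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(11)

def is_estimate(mu_prop, n=20000):
    """Estimate P(X > 3), X ~ N(0,1), by importance sampling from
    N(mu_prop, 1); returns the estimate and its standard error."""
    x = rng.normal(mu_prop, 1.0, n)
    w = norm.pdf(x) / norm.pdf(x, loc=mu_prop)   # importance weights
    h = (x > 3.0) * w
    return h.mean(), h.std(ddof=1) / np.sqrt(n)

print("truth:", norm.sf(3.0))
for mu in (0.0, 1.5, 3.0, 5.0):                  # candidate sampler parameters
    est, se = is_estimate(mu)
    print(f"proposal mean {mu}: {est:.2e} +/- {se:.1e}")
```

Shifting the proposal toward the rare region (here near 3) cuts the standard error by orders of magnitude; a poor choice inflates it again, which is exactly the variance-minimization problem a sampler-selection rule must solve.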

15.
In regression analysis, to overcome the problem of multicollinearity, the r–k class estimator is proposed as an alternative to the ordinary least squares (OLS) estimator; it is a general estimator that includes the ordinary ridge regression estimator, the principal components regression estimator, and the OLS estimator as special cases. In this article, we derive the necessary and sufficient conditions for the superiority of the r–k class estimator over each of these estimators under the Mahalanobis loss function, using the average loss criterion. We then compare these estimators with each other using the same criterion. We also suggest tests to verify whether these conditions are indeed satisfied. Finally, a numerical example and a Monte Carlo simulation are given to illustrate the theoretical results.
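The average-loss comparison can be mocked up directly: the sketch below computes the Monte Carlo average of a Mahalanobis-type loss (β̂ − β)′X′X(β̂ − β) for the OLS, ridge, PCR, and r–k special cases on one collinear design. The design, shrinkage constants, and loss weighting are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(12)
n, p, reps = 60, 4, 2000
L = np.linalg.cholesky(0.95 * np.ones((p, p)) + 0.05 * np.eye(p))
X = rng.normal(size=(n, p)) @ L.T                # strongly collinear design
beta = np.array([1.0, 1.0, -1.0, 0.5])
XtX = X.T @ X
w, T = np.linalg.eigh(XtX)

def rk(y, r, k):
    """r-k class estimator: r = p and k = 0 is OLS; k = 0 alone is PCR;
    r = p alone is ordinary ridge."""
    idx = np.argsort(w)[::-1][:r]
    Tr = T[:, idx]
    return Tr @ ((Tr.T @ X.T @ y) / (w[idx] + k))

loss = {"OLS": 0.0, "PCR(3)": 0.0, "ridge(2)": 0.0, "r-k(3,2)": 0.0}
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    for name, b in (("OLS", rk(y, p, 0.0)), ("PCR(3)", rk(y, 3, 0.0)),
                    ("ridge(2)", rk(y, p, 2.0)), ("r-k(3,2)", rk(y, 3, 2.0))):
        loss[name] += (b - beta) @ XtX @ (b - beta) / reps

print({name: round(v, 2) for name, v in loss.items()})
```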

16.
17.
Many analyses in epidemiological and prognostic studies, and in studies of event history data, require methods that allow for unobserved covariates, or "frailties". We consider the shared frailty model in the framework of the parametric proportional hazards model, under certain assumptions about the frailty distribution and the baseline distribution. The exponential distribution is commonly used for analyzing lifetime data. In this paper, we consider a shared gamma frailty model with the bivariate exponential distribution of Marshall and Olkin (1967) as the baseline hazard for bivariate survival times. We address the inferential problem in a Bayesian framework, supported by a comprehensive simulation study and a real data example. We introduce a Bayesian estimation procedure using Markov chain Monte Carlo (MCMC) techniques to estimate the parameters of the proposed model, compare the true parameter values with the estimates for different sample sizes, and fit the model to a real bivariate survival data set on diabetic retinopathy.
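To make the MCMC step concrete, here is a minimal random-walk Metropolis sketch for a shared gamma frailty model — with independent exponential baseline hazards rather than the Marshall–Olkin baseline, and with no censoring, both simplifying assumptions made for brevity. Integrating out a mean-one gamma frailty with variance θ gives the bivariate density λ₁λ₂(1 + θ)(1 + θ(λ₁t₁ + λ₂t₂))^(−1/θ−2):

```python
import numpy as np

rng = np.random.default_rng(13)
n, lam1, lam2, theta = 300, 1.0, 0.5, 0.8
z = rng.gamma(1.0 / theta, theta, n)             # shared frailty, mean one
t1 = rng.exponential(1.0 / (z * lam1))           # paired survival times
t2 = rng.exponential(1.0 / (z * lam2))

def loglik(p):
    """Log-likelihood with the frailty integrated out (no censoring)."""
    l1, l2, th = np.exp(p)                       # parameters on the log scale
    u = 1.0 + th * (l1 * t1 + l2 * t2)
    return np.sum(np.log(l1 * l2 * (1.0 + th)) - (1.0 / th + 2.0) * np.log(u))

cur, cur_ll, draws = np.zeros(3), loglik(np.zeros(3)), []
for it in range(20000):                          # random-walk Metropolis
    prop = cur + rng.normal(0.0, 0.05, 3)
    prop_ll = loglik(prop)
    if np.log(rng.random()) < prop_ll - cur_ll:  # flat prior on log params
        cur, cur_ll = prop, prop_ll
    if it >= 10000:                              # discard burn-in
        draws.append(np.exp(cur))
print("posterior means (lam1, lam2, theta):", np.mean(draws, axis=0).round(2))
```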

18.
This paper presents some considerations about numerical procedures for generating D-optimal designs in a finite design space. The influence of the starting procedure and of the finite candidate set of points on design efficiency is considered. Some modifications of existing procedures for generating D-optimal designs are described. It is shown that for a large number of factors, sequential procedures are more appropriate than nonsequential ones.
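A sequential procedure of the kind compared here can be sketched in a few lines: starting from a nonsingular design, repeatedly add the candidate point with the largest prediction variance x′M⁻¹x (a Wynn-type step). The candidate grid and design size are illustrative, and the paper's specific modifications are not reproduced; the function name is ours.

```python
import numpy as np

def sequential_d_optimal(cand, n_points, seed=14):
    """Wynn-type sequential construction: start from a random nonsingular
    p-point design, then repeatedly add the candidate with the largest
    prediction variance x' M^{-1} x."""
    rng = np.random.default_rng(seed)
    p = cand.shape[1]
    while True:                                   # nonsingular starting design
        idx = list(rng.choice(len(cand), p, replace=False))
        if abs(np.linalg.det(cand[idx].T @ cand[idx])) > 1e-9:
            break
    while len(idx) < n_points:
        Minv = np.linalg.inv(cand[idx].T @ cand[idx])
        var = np.einsum("ij,jk,ik->i", cand, Minv, cand)
        idx.append(int(np.argmax(var)))
    return idx

g = np.linspace(-1.0, 1.0, 5)                     # finite candidate grid
xx, yy = np.meshgrid(g, g)
cand = np.column_stack([np.ones(xx.size), xx.ravel(), yy.ravel()])
idx = sequential_d_optimal(cand, 8)
M = cand[idx].T @ cand[idx]
print("chosen points:\n", cand[idx][:, 1:])       # corner points dominate
print("log det M:", round(float(np.log(np.linalg.det(M))), 2))
```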

19.
A compound decision problem whose component problem is the classification of a random sample as having come from one of a finite number of univariate populations is investigated. The Bayesian approach is discussed, and a distribution-free decision rule with asymptotic risk equal to zero is presented. The asymptotic efficiencies of these rules are discussed.

The results of a computer simulation are presented comparing the Bayes rule to the distribution-free rule under the assumption of normality. It is found that the distribution-free rule can be recommended in situations where certain key location parameters are not known precisely and/or certain distributional assumptions are not satisfied.

20.
It is assumed that the logs of the times to failure in a life test follow a normal distribution. If the test is terminated after r of a sample of n items fail, the test is said to be censored. When the sample size is small and censoring is severe, the usual maximum likelihood estimator of σ is downwardly biased. Monte Carlo techniques and regression analysis were used to develop an empirical correction factor; applying the correction factor to the maximum likelihood estimator yields an unbiased estimate of σ.
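The downward bias, and a Monte Carlo route to a correction factor, can be reproduced in a few lines. The regression step that smooths the factor over different (n, r) is omitted, and the sample sizes and correction form are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(15)
n, r, reps = 10, 5, 1000            # small sample, severe Type II censoring

def sigma_hat(x):
    """MLE of sigma from the r smallest of n (log-)lifetimes; the
    remaining n - r items survive beyond the r-th order statistic."""
    def nll(p):
        mu, log_s = p
        s = np.exp(log_s)
        return -(norm.logpdf(x[:r], mu, s).sum()
                 + (n - r) * norm.logsf(x[r - 1], mu, s))
    res = minimize(nll, x0=[x[:r].mean(), np.log(x[:r].std() + 0.1)])
    return np.exp(res.x[1])

estimates = [sigma_hat(np.sort(rng.normal(0.0, 1.0, n))) for _ in range(reps)]
factor = np.mean(estimates)          # E[sigma_hat]/sigma, estimated by MC
print("mean MLE with true sigma = 1:", round(factor, 3))   # < 1: biased down
sigma_new = sigma_hat(np.sort(rng.normal(0.0, 1.0, n)))
print("raw:", round(sigma_new, 3), "corrected:", round(sigma_new / factor, 3))
```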
