Similar Articles (20 results)
1.
The cumulative incidence function provides intuitive summary information about competing risks data. Via a mixture decomposition of this function, Chang and Wang (Statist. Sinica 19:391–408, 2009) study how covariates affect the cumulative incidence probability of a particular failure type at a chosen time point. Without specifying the corresponding failure time distribution, they propose two estimators and derive their large sample properties. The first estimator uses weighting to adjust for the censoring bias and can be considered an extension of Fine’s method (J R Stat Soc Ser B 61:817–830, 1999); the second uses imputation, extending the idea of Wang (J R Stat Soc Ser B 65:921–935, 2003) from a nonparametric setting to the present regression framework. In this article, for covariates that take only discrete values, we extend both approaches of Chang and Wang (Statist. Sinica 19:391–408, 2009) by allowing left truncation. Large sample properties of the proposed estimators are derived, and their finite sample performance is investigated through a simulation study. We also apply our methods to heart transplant survival data.
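As an illustration of the weighting technique described above, the following Python sketch estimates a cumulative incidence probability by inverse-probability-of-censoring weighting. It is a minimal sketch only: left truncation is omitted, a discrete covariate would be handled by applying the function within each covariate level, and all names are ours, not the authors'.

```python
import numpy as np

def censoring_survival_before(times, status, t):
    """Kaplan-Meier estimate of G(t-) = P(C >= t), the probability of
    remaining uncensored just before time t (status == 0 marks censoring)."""
    G = 1.0
    for u in np.unique(times[(status == 0) & (times < t)]):
        n_at_risk = np.sum(times >= u)
        d = np.sum((status == 0) & (times == u))
        G *= 1.0 - d / n_at_risk
    return G

def ipcw_cif(times, status, cause, t0):
    """IPCW estimate of F_cause(t0) = P(T <= t0, failure type = cause):
    each observed failure of the given type is weighted by 1 / G(T_i-)."""
    times, status = np.asarray(times), np.asarray(status)
    hits = (status == cause) & (times <= t0)
    weights = [1.0 / censoring_survival_before(times, status, t) for t in times[hits]]
    return np.sum(weights) / len(times)

# Toy usage: status 0 = censored, 1 = failure of interest, 2 = competing cause.
rng = np.random.default_rng(0)
t1, t2, c = rng.exponential(1.0, 200), rng.exponential(2.0, 200), rng.exponential(1.5, 200)
obs = np.minimum(np.minimum(t1, t2), c)
stat = np.where(c < np.minimum(t1, t2), 0, np.where(t1 < t2, 1, 2))
print(ipcw_cif(obs, stat, cause=1, t0=1.0))
```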

2.
The subject of the present study is to analyze how accurately the price jump detection methodology elaborated by Barndorff-Nielsen and Shephard (J. Financ. Econom. 2:1–37, 2004a; 4:1–30, 2006) applies to financial time series characterized by less frequent trading. In this context, it is of primary interest to understand the impact of infrequent trading on two test statistics applicable to disentangling contributions from price jumps to realized variance. In a simulation study, evidence is found that infrequent trading induces a sizable distortion of the test statistics towards overrejection. A new empirical investigation using high frequency information on the most heavily traded electricity forward contract of the Nord Pool Energy Exchange corroborates the simulation evidence. In line with the theory, a “zero-return-adjusted estimation” is introduced to reduce the bias in the test statistics, as illustrated in both the simulation study and the empirical case.
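For concreteness, here is a compact sketch of a ratio-form jump statistic of this kind, in the commonly used adjusted version with the max(1, ·) correction; the constants follow the standard definitions of bipower variation and tripower quarticity. This is our illustration of the technique, not the paper's code.

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import norm

def bns_jump_test(returns):
    """Ratio-form jump test on one day of M intraday returns, built from
    realized variance and bipower variation; a large z is evidence of a
    jump contribution to realized variance."""
    r = np.asarray(returns, dtype=float)
    M = len(r)
    mu1 = np.sqrt(2.0 / np.pi)                       # E|Z|, Z ~ N(0, 1)
    mu43 = 2.0 ** (2.0 / 3.0) * gamma(7.0 / 6.0) / gamma(0.5)
    rv = np.sum(r ** 2)                              # realized variance
    bv = mu1 ** -2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
    tq = M * mu43 ** -3 * np.sum(                    # tripower quarticity
        (np.abs(r[2:]) * np.abs(r[1:-1]) * np.abs(r[:-2])) ** (4.0 / 3.0))
    theta = np.pi ** 2 / 4.0 + np.pi - 5.0
    z = (1.0 - bv / rv) / np.sqrt(theta / M * max(1.0, tq / bv ** 2))
    return z, norm.sf(z)                             # statistic and one-sided p-value
```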

3.
While most of the literature on measurement error focuses on additive measurement error, in this paper we consider the multiplicative case. We apply the simulation extrapolation method (SIMEX), a procedure originally proposed by Cook and Stefanski (J. Am. Stat. Assoc. 89:1314–1328, 1994) to correct the bias due to additive measurement error, to the case where data are perturbed by multiplicative noise, and we present several approaches to accounting for multiplicative noise in the SIMEX procedure. Furthermore, we analyze how well these approaches reduce the bias caused by multiplicative perturbation. Using a binary probit model, we produce Monte Carlo evidence on how the reduction of data quality can be minimized. For helpful comments, we thank Helmut Küchenhoff, Winfried Pohlmeier, and Gerd Ronning. Sandra Nolte gratefully acknowledges financial support from the DFG. Elena Biewen and Martin Rosemann gratefully acknowledge financial support from the Federal Ministry of Education and Research (BMBF). The usual disclaimer applies.
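A schematic Python version of SIMEX adapted to multiplicative (lognormal) noise, one of several possible adaptations: extra noise is added on the log scale at increasing levels and a quadratic extrapolant is taken back to the no-error level. The noise model, extrapolant, and naive estimator here are our illustrative choices (the paper works with a binary probit model).

```python
import numpy as np

rng = np.random.default_rng(0)

def simex_multiplicative(w, y, sigma_u, naive_fit, lambdas=(0.5, 1.0, 1.5, 2.0), B=200):
    """SIMEX sketch for multiplicative lognormal measurement error,
    w = x * u with log(u) ~ N(0, sigma_u^2): refit the naive (scalar)
    estimator at noise levels lam, then extrapolate a quadratic in
    (1 + lam) back to lam = -1 (no measurement error)."""
    lams = np.concatenate(([0.0], lambdas))
    means = []
    for lam in lams:
        if lam == 0.0:
            means.append(naive_fit(w, y))
            continue
        reps = [naive_fit(w * np.exp(np.sqrt(lam) * sigma_u * rng.standard_normal(len(w))), y)
                for _ in range(B)]
        means.append(np.mean(reps))
    coef = np.polyfit(1.0 + lams, means, deg=2)   # quadratic extrapolant
    return np.polyval(coef, 0.0)                  # evaluate at lam = -1

# Toy usage with the OLS slope as the naive estimator.
x = rng.normal(2.0, 1.0, 1000)
w = x * np.exp(0.3 * rng.standard_normal(1000))   # multiplicatively perturbed regressor
y = 1.5 * x + rng.standard_normal(1000)
print(simex_multiplicative(w, y, sigma_u=0.3, naive_fit=lambda w_, y_: np.polyfit(w_, y_, 1)[0]))
```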

4.
We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum-likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A.G. Patriota and A.J. Lemonte, Bias correction in a multivariate normal regression model with general parameterization, Stat. Prob. Lett. 79 (2009), pp. 1655–1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that the analytical bias correction outperforms numerical bias corrections obtained from bootstrapping schemes.
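The bootstrap bias correction referred to above admits a short generic sketch (the analytical second-order correction, by contrast, requires model-specific cumulant expressions and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_bias_corrected(data, estimator, B=1000):
    """Nonparametric bootstrap bias correction: estimate the bias of
    theta_hat by mean(theta*_b) - theta_hat and subtract it, giving
    2 * theta_hat - mean(theta*_b)."""
    theta_hat = estimator(data)
    boot = [estimator(data[rng.integers(0, len(data), len(data))]) for _ in range(B)]
    return 2 * theta_hat - np.mean(boot, axis=0)

# Example: the ML variance estimator (divisor n) is biased downward; the
# correction pushes it toward the unbiased divisor-(n-1) estimate.
x = rng.normal(size=30)
print(np.var(x), bootstrap_bias_corrected(x, np.var))
```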

5.
A test for the hypothesis of uniformity on a support S ⊂ ℝ^d is proposed. It is based on the use of multivariate spacings such as those studied in Janson (Ann. Probab. 15:274–280, 1987). As a novel aspect, this test can be adapted to the case where the support S is unknown, provided that it fulfils the shape condition of λ-convexity. The consistency properties of this test are analyzed and its performance is checked through a small simulation study. The numerical problems involved in the practical calculation of the maximal spacing (which is required to obtain the test statistic) are also discussed in some detail.
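The multivariate, unknown-support construction is involved, but the underlying maximal-spacing idea can be illustrated in one dimension with known support [0, 1], using the classical result that n·M_n − log n is asymptotically Gumbel under uniformity. This is our simplification, not the paper's test.

```python
import numpy as np

def max_spacing_test_1d(x):
    """One-dimensional maximal-spacing uniformity test on [0, 1]: under H0,
    n * max_spacing - log(n) is asymptotically Gumbel, so a large maximal
    spacing (a big empty gap) is evidence against uniformity."""
    x = np.sort(np.asarray(x))
    n = len(x)
    spacings = np.diff(np.concatenate([[0.0], x, [1.0]]))
    t = n * spacings.max() - np.log(n)
    return t, 1.0 - np.exp(-np.exp(-t))   # asymptotic upper-tail p-value

rng = np.random.default_rng(0)
print(max_spacing_test_1d(rng.random(500)))       # uniform: large p-value expected
print(max_spacing_test_1d(rng.random(500) ** 3))  # non-uniform: small p-value expected
```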

6.
We propose a test for the equality of the autocovariance functions of two independent and stationary time series. The test statistic is a quadratic form in the vector of differences of the first J + 1 autocovariances. Its asymptotic distribution is derived under the null hypothesis, and the finite-sample properties of the test, namely the size and the power, are investigated by Monte Carlo methods. A by-product of this study is a new estimator of the covariance between two sample autocovariances which provides a positive definite covariance matrix. We establish the convergence of this estimator in the L1 norm.
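The structure of such a quadratic-form test is simple to sketch. The paper's contribution, a positive definite estimator of the covariance matrix, is not reproduced, so the sketch below takes that matrix as an input; all names are ours.

```python
import numpy as np
from scipy.stats import chi2

def sample_acov(x, J):
    """Sample autocovariances gamma_hat(0), ..., gamma_hat(J), divisor n."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[: n - j], x[j:]) / n for j in range(J + 1)])

def acov_equality_test(x, y, J, sigma_hat):
    """Quadratic-form test of equal autocovariances up to lag J for two
    independent stationary series; sigma_hat is an estimate of the
    covariance matrix of the difference vector delta (including its
    sample-size scaling), and the statistic is chi-square with J + 1
    degrees of freedom under the null."""
    delta = sample_acov(x, J) - sample_acov(y, J)
    stat = float(delta @ np.linalg.solve(sigma_hat, delta))
    return stat, chi2.sf(stat, df=J + 1)
```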

7.
In this article we derive finite-sample corrections in matrix notation for likelihood ratio and score statistics in extreme-value linear regression models. We consider three corrected score tests that perform better than the usual score test. We also derive general formulae for second-order biases of maximum likelihood estimates of the linear parameters. Some simulations are performed to compare the likelihood ratio and score statistics with their modified versions and to illustrate the bias correction.

8.
In randomized clinical trials, we are often concerned with comparing two-sample survival data. Although the log-rank test is usually suitable for this purpose, it may result in substantial power loss when the two groups have nonproportional hazards. In the more general class of survival models of Yang and Prentice (Biometrika 92:1–17, 2005), which includes the log-rank test as a special case, we improve model efficiency by incorporating auxiliary covariates that are correlated with the survival times. In a model-free form, we augment the estimating equation with auxiliary covariates, and establish the efficiency improvement using the semiparametric theories in Zhang et al. (Biometrics 64:707–715, 2008) and Lu and Tsiatis (Biometrika 95:679–694, 2008). Under minimal assumptions, our approach produces an unbiased, asymptotically normal estimator with additional efficiency gain. Simulation studies and an application to a leukemia study show the satisfactory performance of the proposed method.

9.
Releases of GDP data undergo a series of revisions over time. These revisions have an impact on the results of macroeconometric models, as documented by the growing literature on real-time data applications. Revisions of U.S. GDP data can be explained and are partly predictable, according to Faust et al. (J. Money Credit Bank. 37(3):403–419, 2005) and Fixler and Grimm (J. Product. Anal. 25:213–229, 2006). This analysis proposes the inclusion of mixed frequency data for forecasting GDP revisions. In this way, the information set available around the first data vintage can be exploited more fully than with purely quarterly data. In-sample and out-of-sample results suggest that forecasts of GDP revisions can be improved by using mixed frequency data.
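One simple way to bring monthly information into a quarterly revision regression is an unrestricted MIDAS specification, sketched below on simulated data; the alignment and variable names are our illustrative assumptions, not the paper's specification.

```python
import numpy as np

def umidas_forecast(revisions, monthly, n_train):
    """Unrestricted-MIDAS sketch: each quarterly revision is regressed on
    the three monthly indicator values of that quarter plus a constant;
    `monthly` has length 3 * len(revisions), aligned month by month."""
    X = np.column_stack([np.ones(len(revisions)), monthly.reshape(-1, 3)])
    beta, *_ = np.linalg.lstsq(X[:n_train], revisions[:n_train], rcond=None)
    return X[n_train:] @ beta        # out-of-sample revision forecasts

# Toy usage with simulated monthly indicators and quarterly revisions.
rng = np.random.default_rng(1)
m = rng.standard_normal(3 * 80)
rev = 0.2 * m.reshape(-1, 3) @ np.array([0.5, 0.3, 0.2]) + 0.1 * rng.standard_normal(80)
print(umidas_forecast(rev, m, n_train=60)[:5])
```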

10.
This paper develops a modified rank score test for non-nested linear regression models. The modified rank score test is robust with respect to models with non-normal distributions and can be viewed as a robust version of the J test of Davidson and MacKinnon (Econometrica 49:781–793, 1981). Therefore, this test does not require a specification of the error density function and is easy to implement. A modified rank score test for multiple non-nested models is also provided. Monte Carlo simulation results show that the test has good finite-sample performance. Financial applications to two competing theories, the capital asset pricing model and the arbitrage pricing theory, are considered herein. Empirical evidence from the modified rank score test shows that the former is the better model for asset pricing.

11.
The co-integrated vector autoregression is extended to allow variables to be observed with classical measurement errors (ME). For estimation, the model is parametrized in a time-invariant state-space form, and an accelerated expectation-maximization algorithm is derived. A simulation study shows that (i) the finite-sample properties of the maximum likelihood (ML) estimates and reduced rank test statistics are excellent; (ii) neglected measurement errors will generally distort unit root inference, due to a moving average component in the residuals; and (iii) the moving average component may, in principle, be approximated by a long autoregression, but a pure autoregression cannot identify the autoregressive structure of the latent process, and the adjustment coefficients are estimated with a substantial asymptotic bias. An application to the zero-coupon yield curve is given.

12.
Starting from the Rao (Commun Stat Theory Methods 20:3325–3340, 1991) regression estimator, we propose a class of estimators for the unknown mean of a survey variable when auxiliary information is available. The bias and the mean square error of the estimators belonging to the class are obtained, and the expressions for the optimum parameters minimizing the asymptotic mean square error are given in closed form. A simple condition allowing us to improve on the classical regression estimator is worked out. Finally, in order to compare the performance of some estimators with that of the regression estimator, a simulation study is carried out in which some population parameters are assumed to be unknown.
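For reference, the classical regression estimator that serves as the benchmark here has a two-line implementation (a sketch of the textbook estimator, not the proposed class):

```python
import numpy as np

def regression_estimator(y, x, X_bar):
    """Classical regression estimator of the population mean of y: the
    sample mean of y is shifted by the fitted slope times the gap between
    the known population mean X_bar of the auxiliary variable x and its
    sample mean."""
    b = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)   # slope of y on x
    return np.mean(y) + b * (X_bar - np.mean(x))
```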

13.
In this paper we present a unified discussion of different approaches to the identification of smoothing spline analysis of variance (ANOVA) models: (i) the “classical” approach (in the line of Wahba in Spline Models for Observational Data, 1990; Gu in Smoothing Spline ANOVA Models, 2002; Storlie et al. in Stat. Sin., 2011) and (ii) the State-Dependent Regression (SDR) approach of Young in Nonlinear Dynamics and Statistics (2001). The latter is a nonparametric approach that is very similar to smoothing splines and kernel regression methods, but is based on recursive filtering and smoothing estimation (the Kalman filter combined with fixed interval smoothing). We show that SDR can be effectively combined with the “classical” approach to obtain a more accurate and efficient estimation of smoothing spline ANOVA models for emulation purposes. We also show that such an approach can compare favorably with kriging.

14.
Clusters of galaxies are a useful proxy for tracing the distribution of mass in the universe. By measuring the mass of clusters of galaxies on different scales, one can follow the evolution of the mass distribution (Martínez and Saar, Statistics of the Galaxy Distribution, 2002). It can be shown that finding galaxy clusters is equivalent to finding density contour clusters (Hartigan, Clustering Algorithms, 1975): connected components of the level set S_c ≡ {f > c}, where f is a probability density function. Cuevas et al. (Can. J. Stat. 28:367–382, 2000; Comput. Stat. Data Anal. 36:441–459, 2001) proposed a nonparametric method that finds density contour clusters via the minimal spanning tree. While their algorithm is conceptually simple, it requires intensive computation for large datasets. We propose a more efficient clustering method based on their algorithm, using the Fast Fourier Transform (FFT). The method is applied to a study of galaxy clustering on large astronomical sky survey data.
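The level-set idea is easy to demonstrate on a grid: bin the points, smooth the histogram with a Gaussian kernel (a convolution, which is exactly the step an FFT can accelerate), and take connected components above the contour level. The sketch below is our grid-based illustration, not the authors' minimal-spanning-tree algorithm.

```python
import numpy as np
from scipy import ndimage

def level_set_clusters(points, c=0.2, grid=128, bandwidth=2.0):
    """Grid-based density contour clustering sketch for 2-D points: bin,
    smooth with a Gaussian kernel, and label the connected components of
    the level set {f_hat > c * max(f_hat)} (a relative contour level)."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=grid)
    f_hat = ndimage.gaussian_filter(H, sigma=bandwidth)   # unnormalized kernel density
    labels, n_clusters = ndimage.label(f_hat > c * f_hat.max())
    return labels, n_clusters

# Toy usage: two well-separated Gaussian blobs should give 2 components.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal((6, 6), 1, (500, 2))])
print(level_set_clusters(pts)[1])
```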

15.
Recent large-scale simulations indicate that a powerful goodness-of-fit test for copulas can be obtained from the process comparing the empirical copula with a parametric estimate of the copula derived under the null hypothesis. A first way to compute approximate p-values for statistics derived from this process consists of using the parametric bootstrap procedure recently thoroughly revisited by Genest and Rémillard. Because it relies heavily on random number generation and estimation, the resulting goodness-of-fit test has a very high computational cost, which can be regarded as an obstacle to its application as the sample size increases. An alternative approach proposed by the authors consists of using a multiplier procedure. The study of the finite-sample performance of the multiplier version of the goodness-of-fit test for bivariate one-parameter copulas showed that it provides a valid alternative to the parametric bootstrap-based test while being orders of magnitude faster. The aim of this work is to extend the multiplier approach to multivariate multiparameter copulas and to study the finite-sample performance of the resulting test. Particular emphasis is put on elliptical copulas such as the normal and the t, as these are flexible models in a multivariate setting. The implementation of the procedure for the latter copulas proves challenging and requires the extension of the Plackett formula for the t distribution to arbitrary dimension. Extensive Monte Carlo experiments, which could be carried out only because of the good computational properties of the multiplier approach, confirm in the multivariate multiparameter context the satisfactory behavior of the goodness-of-fit test.

16.
In this paper we present a review of population-based simulation for static inference problems. Such methods can be described as generating a collection of random variables {X_n}, n = 1, …, N, in parallel in order to simulate from some target density π (or potentially a sequence of target densities). Population-based simulation is important, as many challenging sampling problems in applied statistics cannot be dealt with successfully by conventional Markov chain Monte Carlo (MCMC) methods. We summarize population-based MCMC (Geyer, Computing Science and Statistics: The 23rd Symposium on the Interface, pp. 156–163, 1991; Liang and Wong, J. Am. Stat. Assoc. 96:653–666, 2001) and sequential Monte Carlo samplers (SMC) (Del Moral, Doucet and Jasra, J. Roy. Stat. Soc. Ser. B 68:411–436, 2006a), providing a comparison of the approaches. We give numerical examples from Bayesian mixture modelling (Richardson and Green, J. Roy. Stat. Soc. Ser. B 59:731–792, 1997).
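Parallel tempering, one of the simplest population-based MCMC schemes in this family, runs a population of chains at different temperatures and exchanges states between neighbours so that the cold chain can escape local modes. A self-contained sketch (our illustration, with a bimodal toy target):

```python
import numpy as np

rng = np.random.default_rng(2)

def parallel_tempering(log_target, temps, n_iter=5000, step=0.5):
    """Population-based MCMC sketch: one random-walk Metropolis chain per
    temperature T, targeting pi(x)^(1/T), plus exchange moves that swap
    states between adjacent temperatures; only the T = 1 chain is kept."""
    K = len(temps)
    x = np.zeros(K)
    samples = []
    for _ in range(n_iter):
        for k in range(K):                      # within-chain update
            prop = x[k] + step * rng.standard_normal()
            if np.log(rng.random()) < (log_target(prop) - log_target(x[k])) / temps[k]:
                x[k] = prop
        k = rng.integers(0, K - 1)              # exchange move between neighbours
        log_alpha = (1 / temps[k] - 1 / temps[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
        if np.log(rng.random()) < log_alpha:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])                    # the T = 1 chain targets pi
    return np.array(samples)

# Example: a well-separated two-component normal mixture (unnormalized).
log_pi = lambda v: np.logaddexp(-0.5 * (v + 4) ** 2, -0.5 * (v - 4) ** 2)
draws = parallel_tempering(log_pi, temps=[1.0, 2.0, 4.0, 8.0])
print(np.mean(draws > 0))   # both modes should be visited
```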

17.
In this paper, we study the robustness properties of several procedures for the joint estimation of shape and scale in a generalized Pareto model. The estimators that we primarily focus upon, the most bias-robust estimator (MBRE) and the optimal MSE-robust estimator (OMSE), are one-step estimators distinguished as optimally robust in the shrinking-neighbourhood setting; that is, on such a neighbourhood they minimize the maximal bias and the maximal mean squared error (MSE), respectively. For their initialization, we propose a particular location-dispersion estimator, MedkMAD, which matches the population median and kMAD (an asymmetric variant of the median of absolute deviations) against their empirical counterparts. These optimally robust estimators are compared to the maximum-likelihood, skipped maximum-likelihood, Cramér–von Mises minimum distance, method-of-medians, and Pickands estimators. To quantify their deviation from robust optimality, for each of these suboptimal estimators we determine the finite-sample breakdown point and the influence function, as well as the statistical accuracy measured by asymptotic bias, variance, and MSE, all evaluated uniformly on shrinking neighbourhoods. These asymptotic findings are complemented by an extensive simulation study to assess the finite-sample behaviour of the considered procedures. The applicability of the procedures and their stability against outliers are illustrated for the Danish fire insurance data set from the package evir.

18.
In this paper, a variance decomposition approach to quantify the effects of endogenous and exogenous variables in nonlinear time series models is developed. The decomposition is taken temporally with respect to the source of variation. The methodology uses Monte Carlo methods to carry out the variance decomposition, using the ANOVA-like procedures proposed in Archer et al. (J. Stat. Comput. Simul. 58:99–120, 1997) and Sobol’ (Math. Model. 2:112–118, 1990). The results of this paper can be applied to investment problems, biomathematics, and control theory, where nonlinear time series with multiple inputs are encountered.
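The ANOVA-like Monte Carlo procedure of Sobol’ can be sketched compactly with the standard two-matrix (“pick-and-freeze”) estimator of first-order sensitivity indices; the model and dimensions below are our toy choices, not the paper's time series setting.

```python
import numpy as np

rng = np.random.default_rng(3)

def sobol_first_order(model, d, n=100_000):
    """Monte Carlo estimate of the first-order Sobol' indices S_i of
    model(X) for independent U(0, 1) inputs, using two independent input
    matrices A and B and the pick-and-freeze trick: S_i measures the share
    of Var(Y) explained by input i alone."""
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                              # freeze column i from B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y   # first-order index estimator
    return S

# Example: an additive model whose indices are known analytically
# (proportional to the squared coefficients times Var(U(0,1))).
model = lambda X: X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2]
print(sobol_first_order(model, d=3))
```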

19.
Often in observational studies of time to an event, the study population is a biased (i.e., unrepresentative) sample of the target population. In the presence of biased samples, it is common to weight subjects by the inverse of their respective selection probabilities. Pan and Schaubel (Can J Stat 36:111–127, 2008) recently proposed inference procedures for an inverse selection probability weighted (ISPW) Cox model, applicable when selection probabilities are not treated as fixed but are estimated empirically. The proposed weighting procedure requires auxiliary data to estimate the weights and is computationally more intensive than unweighted estimation. The ignorability of the sample selection process in terms of parameter estimators and predictions is often of interest from several perspectives: e.g., to determine whether weighting makes a significant difference to the analysis at hand, which would in turn address whether the collection of auxiliary data is required in future studies, or to evaluate previous studies that did not correct for selection bias. In this article, we propose methods to quantify the degree of bias corrected by the weighting procedure in the partial likelihood and Breslow-Aalen estimators. Asymptotic properties of the proposed test statistics are derived. The finite-sample significance level and power are evaluated through simulation. The proposed methods are then applied to data from a national organ failure registry to evaluate the bias in a post-kidney-transplant survival model.
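Fitting an inverse-selection-probability-weighted Cox model is straightforward once selection probabilities are in hand; below is a sketch using the lifelines package, where the data, column names, and probabilities are all illustrative (the paper's inference procedures for estimated weights go well beyond this).

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "age": rng.normal(50, 10, n),
    "time": rng.exponential(5, n),
    "event": rng.integers(0, 2, n),
    "p_sel": rng.uniform(0.2, 1.0, n),   # estimated selection probabilities (illustrative)
})
df["w"] = 1.0 / df["p_sel"]              # inverse-selection-probability weights

# Weighted partial likelihood fit; robust standard errors are advisable
# with non-integer weights.
cph = CoxPHFitter()
cph.fit(df[["time", "event", "age", "w"]], duration_col="time",
        event_col="event", weights_col="w", robust=True)
cph.print_summary()
```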

20.
Time series arising in practice often have an inherently irregular sampling structure or missing values, which can arise, for example, from a faulty measuring device or a complex time-dependent nature. Spectral decomposition of time series is a traditionally useful tool for data variability analysis. However, existing methods for spectral estimation often assume a regularly sampled time series, or require modifications to cope with irregular or ‘gappy’ data. Additionally, many techniques also assume that the time series are stationary, which in the majority of cases is demonstrably not appropriate. This article addresses the topic of spectral estimation of a non-stationary time series sampled with missing data. The time series is modelled as a locally stationary wavelet process in the sense introduced by Nason et al. (J. R. Stat. Soc. B 62(2):271–292, 2000), and its realization is assumed to feature missing observations. Our work proposes an estimator (the periodogram) of the process wavelet spectrum which copes with missing data while relaxing the strong assumption of stationarity. At the centre of our construction are second-generation wavelets built by means of the lifting scheme (Sweldens, Wavelet Applications in Signal and Image Processing III, Proc. SPIE, vol. 2569, pp. 68–79, 1995), designed to cope with irregular data. We investigate the theoretical properties of our proposed periodogram and show that it can be smoothed to produce a bias-corrected spectral estimate by adopting a penalized least squares criterion. We demonstrate our method with real data and simulated examples.
