1.
In this article, we introduce three new classes of multivariate risk statistics, which can be regarded as data-based versions of multivariate risk measures: multivariate convex risk statistics, multivariate comonotonic convex risk statistics, and multivariate empirical-law-invariant convex risk statistics. Representation results are provided for each class, with proofs resting largely on arguments developed in this article. It turns out that the relevant existing results in the literature are all special cases of those obtained here.
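For orientation, a minimal sketch of the convexity axiom underlying such classes, in our own notation (the article's formal setting may differ): a multivariate risk statistic acts on data matrices of $n$ observations of a $d$-dimensional position and is convex if

```latex
% Convexity axiom for a multivariate risk statistic (our notation):
% \rho maps n observations of a d-dimensional position to a capital figure.
\[
\rho : \mathbb{R}^{n \times d} \to \mathbb{R},
\qquad
\rho\bigl(\lambda X + (1-\lambda)Y\bigr)
\;\le\; \lambda\,\rho(X) + (1-\lambda)\,\rho(Y),
\quad \lambda \in [0,1].
\]
```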
2.
3.
Statistics, 2012, 46(6): 1306–1328
In this paper, we consider testing the homogeneity of risk differences in independent binomial distributions, particularly when the data are sparse. Through theoretical and numerical studies, we point out drawbacks of existing tests in either controlling the nominal size or attaining power. The proposed test is designed to avoid these drawbacks. We present its asymptotic null distribution and asymptotic power function, and we provide numerical studies, including simulations and real-data examples, showing that the proposed test gives reliable results compared with existing testing procedures.
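For context, a sketch of one classical competitor of the kind the paper critiques: the inverse-variance-weighted homogeneity statistic for risk differences across K independent 2x2 tables. This is a textbook test, not the authors' proposal; the data are made up, and the comment marks where sparsity breaks it.

```python
# Classical inverse-variance-weighted homogeneity test for risk
# differences across K independent strata -- NOT the paper's proposal.
import numpy as np
from scipy.stats import chi2

def homogeneity_test(x1, n1, x2, n2):
    """x1, x2: event counts; n1, n2: sample sizes, per stratum."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2                                   # stratum risk differences
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2
    w = 1.0 / var                                 # undefined when var == 0,
                                                  # which is common with sparse data
    d_bar = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_bar) ** 2)              # ~ chi2(K - 1) under H0
    return q, chi2.sf(q, len(d) - 1)

x1 = np.array([3, 5, 2, 7]); n1 = np.array([40, 55, 30, 60])
x2 = np.array([1, 6, 1, 4]); n2 = np.array([42, 50, 28, 65])
stat, pval = homogeneity_test(x1, n1, x2, n2)
print(f"Q = {stat:.3f}, p = {pval:.3f}")
```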
4.
The choice of an analytical framework for systemic financial risk is one of the focal points of debate between academics and practitioners in this field. Building a sound framework requires a well-grounded approach to macro-level aggregation, and the aggregation schemes used to construct systemic financial risk frameworks mainly comprise simple summation, neoclassical macro aggregation, and newer aggregation schemes developed under macroprudential principles. A longitudinal review and cross-comparison of systemic financial risk research under these different aggregation schemes shows that current research should build on monetary-value aggregation to form an integrated analytical framework with a solid theoretical foundation.
5.
Yang-Jin Kim, Journal of Applied Statistics, 2017, 44(15): 2778–2790
In this article, we analyze interval-censored failure time data with competing risks. A new estimator of the cumulative incidence function is derived using an approximate likelihood, and a test statistic for comparing two samples is then obtained by extending Sun's test statistic. Small-sample properties of the proposed methods are examined through simulations, and a cohort dataset of AIDS patients is analyzed as a real example.
6.
As modeling efforts expand to a broader spectrum of areas, the amount of computer time required to exercise the corresponding computer codes has become quite costly (several hours for a single run is not uncommon). This cost can be tied directly to the complexity of the modeling and to the large number of input variables (often numbering in the hundreds). Further, the complexity of the modeling (usually involving systems of differential equations) makes the relationships among the input variables mathematically intractable. In this setting it is desired to perform sensitivity studies of the input-output relationships, so a judicious procedure for selecting values of the input variables is required. Latin hypercube sampling has been shown to work well on this type of problem. However, a variety of situations require that decisions and judgments be made in the face of uncertainty. The source of this uncertainty may be lack of knowledge about probability distributions associated with input variables, or about different hypothesized future conditions, or it may arise from different strategies associated with a decision-making process. In this paper a generalization of Latin hypercube sampling is given that allows these areas to be investigated without making additional computer runs. In particular, it is shown how weights associated with Latin hypercube input vectors may be changed to reflect different probability distribution assumptions on key input variables and yet provide an unbiased estimate of the cumulative distribution function of the output variable. This allows different distribution assumptions on input variables to be studied without additional computer runs and without fitting a response surface. In addition, these same weights can be used in a modified nonparametric Friedman test to compare treatments. Sample size requirements needed to apply the results of the work are also considered. The procedures presented in this paper are illustrated using a model associated with the risk assessment of geologic disposal of radioactive waste.
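A minimal sketch of the reweighting idea under toy assumptions of our own (a one-dimensional input sampled uniformly, a triangular alternative density, and a cheap stand-in for the expensive model); the paper's weighting scheme is more general:

```python
# Reuse Latin hypercube runs under a new input distribution by
# importance-reweighting the existing outputs, instead of rerunning
# the (expensive) model.
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Latin hypercube sample of one input on [0, 1]: one point per stratum,
# then a random permutation of the run order.
x = rng.permutation((np.arange(N) + rng.uniform(size=N)) / N)

y = np.exp(x) + 0.1 * rng.normal(size=N)   # stand-in for the costly model

# Original assumption: x ~ Uniform(0, 1), density f0(x) = 1.
# New assumption:      triangular density f1(x) = 2x on [0, 1].
w = 2.0 * x                                # weights w = f1(x) / f0(x)

def cdf_estimate(t):
    # E_f0[w * 1{Y <= t}] = P_f1(Y <= t), so the plain mean is unbiased
    # for the output CDF under the *new* input distribution.
    return np.mean(w * (y <= t))

print(cdf_estimate(2.0))
```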
7.
Darold T. Barnum, John M. Gleason, Matthew G. Karlaftis, Glen T. Schumock, Karen L. Shields, Sonali Tandon, Journal of Applied Statistics, 2012, 39(4): 815–828
This paper describes a statistical method for estimating confidence intervals for the data envelopment analysis (DEA) scores of individual organizations or other entities. The method applies statistical panel-data analysis, which provides proven and powerful methodologies for diagnostic testing and for estimating confidence intervals. The DEA scores are tested for violations of standard statistical assumptions, including contemporaneous correlation, serial correlation, heteroskedasticity and non-normality. Generalized least squares models are used to adjust for any violations present and to estimate valid confidence intervals within which the true efficiency of each individual decision-making unit lies. The method is illustrated with two sets of panel data, one from large US urban transit systems and the other from a group of US hospital pharmacies.
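For readers new to DEA, the standard input-oriented CCR envelopment program that yields an efficiency score $\theta^{*} \in (0,1]$ for a unit with inputs $x_{i0}$ and outputs $y_{r0}$; this is generic background, not necessarily the exact DEA variant the paper uses:

```latex
\[
\begin{aligned}
\theta^{*} \;=\; \min_{\theta,\,\lambda}\;\; & \theta \\
\text{s.t.}\;\;
& \textstyle\sum_{j=1}^{n} \lambda_j\, x_{ij} \;\le\; \theta\, x_{i0},
  && i = 1,\dots,m \quad \text{(inputs)} \\
& \textstyle\sum_{j=1}^{n} \lambda_j\, y_{rj} \;\ge\; y_{r0},
  && r = 1,\dots,s \quad \text{(outputs)} \\
& \lambda_j \;\ge\; 0, && j = 1,\dots,n
\end{aligned}
\]
```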
8.
9.
The proportional hazards model with a biomarker–treatment interaction plays an important role in survival analysis of subset treatment effects. A threshold parameter for a continuous biomarker variable defines the subset of patients who may benefit, or suffer harm, from a new treatment. In this article, we model a continuous threshold effect using the rectified linear unit and propose a gradient descent method to obtain the maximum likelihood estimates of the regression coefficients and the threshold parameter simultaneously. Under certain regularity conditions, we prove consistency and asymptotic normality, and we provide a robust estimate of the covariance matrix when the model is misspecified. To illustrate the finite-sample properties of the proposed methods, we simulate data to evaluate the empirical biases, standard errors and coverage probabilities under both correctly specified and misspecified models. The proposed continuous threshold model is applied to prostate cancer data with serum prostatic acid phosphatase as a biomarker.
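A toy sketch of the estimation idea, under assumptions of our own (a single biomarker, no tied event times, simulated data, and a generic optimizer in place of the authors' gradient descent and asymptotic theory):

```python
# Continuous-threshold Cox model sketch: the treatment effect enters
# through relu(z - c), where c is the unknown biomarker threshold.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 300
z = rng.uniform(0, 4, n)                     # biomarker
trt = rng.integers(0, 2, n)                  # treatment arm
true = dict(b_trt=0.2, b_int=-0.8, c=2.0)    # assumed truth for the toy data
lin = true["b_trt"] * trt + true["b_int"] * trt * np.maximum(z - true["c"], 0)
t = rng.exponential(np.exp(-lin))            # event times, baseline hazard 1
event = rng.uniform(size=n) < 0.8            # light random censoring

order = np.argsort(t)                        # sort by time to form risk sets
t, z, trt, event = t[order], z[order], trt[order], event[order]

def neg_log_partial_lik(par):
    b_trt, b_int, c = par
    eta = b_trt * trt + b_int * trt * np.maximum(z - c, 0.0)
    # risk set of subject i = all subjects with time >= t_i (no ties)
    log_risk = np.log(np.cumsum(np.exp(eta)[::-1])[::-1])
    return -np.sum((eta - log_risk)[event])

fit = minimize(neg_log_partial_lik, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
print("estimates (b_trt, b_int, c):", np.round(fit.x, 2))
```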
10.
Philip J. Byrne, Communications in Statistics - Theory and Methods, 2013, 42(3): 555–569
This paper addresses the problem of testing the multivariate linear hypothesis when the errors follow an antedependence model (Gabriel, 1961, 1962). Antedependence can be formulated as a nonstationary autoregressive model of general order. Three test statistics are derived that provide analogs to three commonly used MANOVA statistics: Wilks' Lambda, the Lawley-Hotelling Trace, and Pillai's Trace. Formulas are given for each of these statistics showing how they can be obtained from any statistical computing package that calculates the usual MANOVA statistics. These antedependence statistics are appropriate for analyzing certain multivariate data sets in which repeated measurements are taken on the same subjects over a period of time.
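In symbols (our notation, not the paper's): order-$r$ antedependence corresponds to a nonstationary autoregression whose coefficients and innovation variances may change with time,

```latex
\[
X_t \;=\; \sum_{k=1}^{\min(r,\,t-1)} \phi_{tk}\, X_{t-k} \;+\; \varepsilon_t,
\qquad
\varepsilon_t \ \text{independent},\quad
\operatorname{Var}(\varepsilon_t) = \sigma_t^2,
\qquad t = 1, \dots, T.
\]
```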
11.
Frits Bijleveld, Jacques Commandeur, Phillip Gould, Siem Jan Koopman, Journal of the Royal Statistical Society: Series A (Statistics in Society), 2008, 171(1): 265–277
Summary. Risk is at the centre of many policy decisions in companies, governments and other institutions. The risk of road fatalities concerns local governments in planning countermeasures, the risk and severity of counterparty default concern bank risk managers daily, and the risk of infection has actuarial and epidemiological consequences. However, risk cannot be observed directly and it usually varies over time. We introduce a general multivariate time series model for the analysis of risk based on latent processes for the exposure to an event, the risk of that event occurring and the severity of the event. Linear state space methods can be used for the statistical treatment of the model. The new framework is illustrated for time series of insurance claims, credit card purchases and road safety. It is shown that the general methodology can be effectively used in the assessment of risk.
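One simple instantiation of the latent decomposition, in our own schematic notation (the paper's model is more general): expected event counts factor into exposure and risk, each following a latent random walk on the log scale, with severity $s_t$ modelled by an analogous latent process,

```latex
\[
\mathbb{E}[y_t] \;=\; e_t\, r_t,
\qquad
\log e_t = \log e_{t-1} + \eta_t^{(e)},
\qquad
\log r_t = \log r_{t-1} + \eta_t^{(r)}.
\]
```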
12.
Non-likelihood-based methods for repeated measures analysis of binary data in clinical trials can result in biased estimates of treatment effects and associated standard errors when the dropout process is not completely at random. We tested the utility of a multiple imputation approach in reducing these biases. Simulations were used to compare the performance of multiple imputation with generalized estimating equations and restricted pseudo-likelihood in five representative clinical trial profiles for estimating (a) overall treatment effects and (b) treatment differences at the last scheduled visit. In clinical trials with moderate to high (40–60%) dropout rates and dropouts missing at random, multiple imputation led to less biased and more precise estimates of treatment differences for binary outcomes based on underlying continuous scores.
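The pooling step behind any multiple imputation analysis is Rubin's rules; a minimal sketch with made-up per-imputation results (not the paper's simulation output):

```python
# Rubin's rules: pool an effect estimate across m imputed datasets.
# In practice each (estimate, SE) pair comes from refitting the
# analysis model on one completed dataset.
import numpy as np

est = np.array([0.42, 0.37, 0.45, 0.40, 0.39])   # per-imputation estimates
se  = np.array([0.11, 0.12, 0.11, 0.13, 0.12])   # per-imputation std. errors
m = len(est)

q_bar = est.mean()                    # pooled point estimate
w_bar = np.mean(se ** 2)              # within-imputation variance
b = est.var(ddof=1)                   # between-imputation variance
t_var = w_bar + (1 + 1 / m) * b       # total variance (Rubin, 1987)

print(f"pooled estimate {q_bar:.3f}, pooled SE {np.sqrt(t_var):.3f}")
```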
13.
In this note, we consider the classical insurance risk model with heavy-tailed claim distributions. Using the Pollaczek–Khinchin formula, we provide a sensitivity analysis of the ruin probability.
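For reference, the Pollaczek–Khinchin formula in the classical Cramér–Lundberg model, in standard notation: claim arrival rate $\lambda$, premium rate $c$, mean claim size $\mu$, claim distribution $F$, and ruin probability $\psi(u)$ for initial surplus $u$,

```latex
\[
1 - \psi(u) \;=\; (1-\rho)\sum_{n=0}^{\infty} \rho^{\,n}\, F_I^{*n}(u),
\qquad
\rho = \frac{\lambda\mu}{c} < 1,
\qquad
F_I(x) = \frac{1}{\mu}\int_0^{x} \bigl(1 - F(y)\bigr)\,dy,
\]
```

where $F_I^{*n}$ denotes the $n$-fold convolution of the integrated-tail (equilibrium) distribution $F_I$, which is where heavy-tailed claims enter.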
14.
Sander Greenland, Journal of the Royal Statistical Society: Series A (Statistics in Society), 2005, 168(2): 267–306
Summary. Conventional analytic results do not reflect any source of uncertainty other than random error, and as a result readers must rely on informal judgments regarding the effect of possible biases. When standard errors are small these judgments often fail to capture sources of uncertainty and their interactions adequately. Multiple-bias models provide alternatives that allow one systematically to integrate major sources of uncertainty, and thus to provide better input to research planning and policy analysis. Typically, the bias parameters in the model are not identified by the analysis data and so the results depend completely on priors for those parameters. A Bayesian analysis is then natural, but several alternatives based on sensitivity analysis have appeared in the risk assessment and epidemiologic literature. Under some circumstances these methods approximate a Bayesian analysis and can be modified to do so even better. These points are illustrated with a pooled analysis of case–control studies of residential magnetic field exposure and childhood leukaemia, which highlights the diminishing value of conventional studies conducted after the early 1990s. It is argued that multiple-bias modelling should become part of the core training of anyone who will be entrusted with the analysis of observational data, and should become standard procedure when random error is not the only important source of uncertainty (as in meta-analysis and pooled analysis).
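As one concrete example of a bias model of this kind (our illustration, not one drawn from the paper): with an assumed sensitivity $Se$ and specificity $Sp$ for exposure classification, an observed exposed count $a^{*}$ among $N$ cases is corrected by

```latex
\[
a \;=\; \frac{a^{*} - (1 - Sp)\,N}{Se + Sp - 1}.
\]
```

The bias parameters $Se$ and $Sp$ are not identified by the data, so, as the paper argues, the corrected result rests entirely on the priors or sensitivity ranges assumed for them.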