Retrieved 20 similar documents (search time: 375 ms)
1.
Chen Li, Communications in Statistics: Theory and Methods, 2017, 46(8): 3934-3948
This article further investigates the allocation of coverage limits and deductibles to multiple independent risks from the viewpoint of policyholders with increasing utility functions. In a more general setup, we develop the usual stochastic orders on the retained loss, which either generalize or supplement the corresponding results due to Lu and Meng (2011) and Hu and Wang (2014). Also, the most unfavorable and favorable allocations of coverage limits and deductibles are developed for multiple risks with dominated reversed hazard rates and hazard rates, respectively.
2.
Junyong Park, Jayson D. Wilbur, Jayanta K. Ghosh, Cindy H. Nakatsu, Corinne Ackerman, Communications in Statistics: Simulation and Computation, 2013, 42(4): 855-869
We adopt boosting for classification and selection of high-dimensional binary variables, for which classical methods based on normality and a nonsingular sample dispersion are inapplicable. Boosting seems particularly well suited for binary variables. We present three methods, of which two combine boosting with the relatively classical variable selection methods developed in Wilbur et al. (2002). Our primary interest is variable selection in classification, with small misclassification error used as validation of the proposed method for variable selection. Two of the new methods perform uniformly better than Wilbur et al. (2002) in one set of simulated and three real-life examples.
3.
In this article, we consider two different shared frailty regression models under the assumption of a Gompertz baseline distribution. The gamma distribution is the most common assumption for the frailty distribution; to compare results with the gamma frailty model, we also consider the inverse Gaussian shared frailty model. We fit these two models to a real-life bivariate survival data set of acute leukemia remission times (Freireich et al., 1963). Analysis is performed using Markov chain Monte Carlo methods. Model comparison is made using a Bayesian model selection criterion, and a well-fitting model is suggested for the acute leukemia data.
4.
This paper is based on the application of a Bayesian model to a clinical trial study to determine a more effective treatment to lower mortality rates and consequently to increase survival times among patients with lung cancer. In this study, Qian et al. [13] strove to determine if a Weibull survival model can be used to decide whether to stop a clinical trial. The traditional Gibbs sampler was used to estimate the model parameters. This paper proposes to use the independent steady-state Gibbs sampling (ISSGS) approach, introduced by Dunbar et al. [3], to improve the original Gibbs sampler in multidimensional problems. It is demonstrated that ISSGS provides accurate, unbiased estimation and improves the performance and convergence of the Gibbs sampler in this application.
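The abstract contrasts the traditional Gibbs sampler with ISSGS. The mechanics of the former can be illustrated on a toy target; the sketch below (function and parameter names are my own, and the bivariate normal target is an illustration, not the paper's Weibull trial model) alternates draws from the two full conditionals.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=5000, burn_in=500, seed=0):
    """Traditional Gibbs sampler for a standard bivariate normal with
    correlation rho, alternating draws from the full conditionals:
    x | y ~ N(rho * y, 1 - rho^2) and y | x ~ N(rho * x, 1 - rho^2)."""
    rng = np.random.default_rng(seed)
    sd = np.sqrt(1.0 - rho ** 2)
    x = y = 0.0
    draws = []
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # draw x from its full conditional
        y = rng.normal(rho * x, sd)   # draw y from its full conditional
        if t >= burn_in:              # discard burn-in draws
            draws.append((x, y))
    return np.array(draws)
```

After burn-in, the retained draws approximate the joint target, so their sample correlation should be close to rho; slow mixing of exactly this alternating scheme in higher dimensions is what approaches like ISSGS aim to improve.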
5.
Lindeman et al. [12] provide a unique solution to the relative importance of correlated predictors in multiple regression by averaging squared semi-partial correlations obtained for each predictor across all p! orderings. In this paper, we propose a series of predictor sensitivity statistics that complement the variance decomposition procedure advanced by Lindeman et al. [12]. First, we detail the logic of averaging over orderings as a technique of variance partitioning. Second, we assess predictors by conditional dominance analysis, a qualitative procedure designed to overcome defects in the Lindeman et al. [12] variance decomposition solution. Third, we introduce a suite of indices to assess the sensitivity of a predictor to model specification, advancing a series of sensitivity-adjusted contribution statistics that allow for more definite quantification of predictor relevance. Fourth, we describe the analytic efficiency of our proposed technique relative to the Budescu conditional dominance solution to the uneven contribution of predictors across all p! orderings.
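The averaging-over-orderings logic can be sketched directly. The following Python sketch (function names are my own, not from the paper) computes each predictor's incremental R² in every one of the p! entry orders and averages; because incremental R² telescopes within each ordering, the averaged importances sum to the full-model R².

```python
import itertools
import numpy as np

def r_squared(X, y, cols):
    """R^2 of an OLS regression of y on the columns in `cols` (with intercept)."""
    if not cols:
        return 0.0
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def lmg_importance(X, y):
    """Average each predictor's incremental R^2 over all p! entry orders."""
    p = X.shape[1]
    scores = np.zeros(p)
    perms = list(itertools.permutations(range(p)))
    for order in perms:
        included = []
        for j in order:
            before = r_squared(X, y, included)
            included.append(j)
            scores[j] += r_squared(X, y, included) - before
    return scores / len(perms)
```

The brute-force enumeration is only feasible for small p, which is precisely the computational burden that motivates the analytic treatments discussed in the paper.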
6.
In this paper we propose a new lifetime model for multivariate survival data in the presence of surviving fractions and examine some of its properties. Its genesis is based on situations in which there are m types of unobservable competing causes, where each cause is related to a time of occurrence of an event of interest. Our model is a multivariate extension of the univariate survival cure rate model proposed by Rodrigues et al. [37]. The inferential approach relies on maximum likelihood tools. We perform a simulation study in order to verify the asymptotic properties of the maximum likelihood estimators. The simulation study also focuses on the size and power of the likelihood ratio test. The methodology is illustrated on a real customer churn data set.
7.
We propose a new ratio-type estimator for estimating the finite population mean using two auxiliary variables in stratified two-phase sampling. Expressions for the bias and mean squared error of the proposed estimator are derived up to the first order of approximation. The proposed estimator is more efficient than the usual stratified sample mean estimator, the traditional stratified ratio estimator, and several other stratified estimators, including those of Bahl and Tuteja (1991), Chami et al. (2012), Chand (1975), Choudhury and Singh (2012), Hamad et al. (2013), Vishwakarma and Gangele (2014), Sanaullah et al. (2014), and Chanu and Singh (2014).
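The abstract does not give the proposed estimator in closed form, but every estimator in the comparison builds on the classical ratio estimator, which exploits a known population mean of a correlated auxiliary variable. A minimal single-phase, unstratified sketch (function name is mine; this is the baseline being improved upon, not the paper's two-phase estimator):

```python
def ratio_estimate(y_sample, x_sample, x_pop_mean):
    """Classical ratio estimator of the population mean of y:
    y_bar * (X_bar / x_bar), where X_bar is the known population mean
    of an auxiliary variable x positively correlated with y."""
    y_bar = sum(y_sample) / len(y_sample)
    x_bar = sum(x_sample) / len(x_sample)
    return y_bar * (x_pop_mean / x_bar)
```

When the sample happens to under-sample large x values (x_bar below X_bar), the factor X_bar / x_bar scales the plain sample mean up accordingly, which is the source of the efficiency gain over the unadjusted mean.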
8.
M. Pilar Alonso, Asunción Beamonte, Manuel Salvador, Journal of Applied Statistics, 2015, 42(5): 1043-1063
In this paper a methodology for the delineation of local labour markets (LLMs) using evolutionary algorithms is proposed. This procedure, based on that of Flórez-Revuelta et al. [13,14], introduces three modifications. First, initial groups of municipalities with a minimum size requirement are built using the travel time between them. Second, a not-fully-random initiation algorithm is proposed. Third, a contiguity step is implemented as the final stage of the procedure. These modifications significantly decrease the computational time of the algorithm (by up to 99%) without any deterioration in the quality of the solutions. The optimization algorithm may return a set of potential solutions with very similar objective-function values that nevertheless correspond to different partitions, both in the number of markets and in their composition. In order to capture their common aspects, an algorithm based on a k-means-type cluster partitioning is presented. This stage of the procedure also provides a ranking of LLM foci useful to planners and administrations in decision-making processes on issues related to labour activities. Finally, to evaluate the performance of the algorithm, a toy example with artificial data is analysed. The full methodology is illustrated with a real commuting data set for the region of Aragón (Spain).
9.
Viswanathan Ramakrishnan, Communications in Statistics: Simulation and Computation, 2013, 42(3): 405-418
In many genetic analyses of dichotomous twin data, odds ratios have been used to test hypotheses on the heritability and shared common environment effects of a given disease (Lichtenstein et al., 2000; Ahlbom et al., 1997; Ramakrishnan et al., 1992). However, estimates of these two effects have not been dealt with in the literature. In epidemiology, the attributable fraction (AF), a function of the odds ratio and the prevalence of the risk factor, has been used to describe the contribution of a risk factor to a disease in a given population (Leviton, 1973). In this article, we adapt the AF to quantify the heritability and the shared common environment. Twin data on cancer, gallstone disease, and phobia are used to illustrate the applicability of the AF estimate as a measure of heritability.
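The abstract does not spell out the paper's adaptation, but the classical Levin attributable fraction it builds on is a one-line formula. A sketch (function name is mine), using the odds ratio as a rare-outcome approximation to the relative risk:

```python
def attributable_fraction(odds_ratio, prevalence):
    """Levin's population attributable fraction,
    AF = p*(RR - 1) / (1 + p*(RR - 1)),
    with the odds ratio standing in for the relative risk RR
    (a common approximation when the outcome is rare)."""
    excess = prevalence * (odds_ratio - 1.0)
    return excess / (1.0 + excess)
```

For example, an odds ratio of 2.0 with exposure prevalence 0.5 attributes one third of the disease burden to the risk factor; an odds ratio of 1.0 attributes none.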
10.
Shesh N. Rai, Jianmin Pan, Xiaobin Yuan, Jianguo Sun, Melissa M. Hudson, Deo K. Srivastava, Communications in Statistics: Theory and Methods, 2013, 42(17): 3117-3133
New drug discovery in pediatrics has dramatically improved survival, but with long-term adverse events. This motivates the examination of adverse outcomes such as long-term toxicity in a phase IV trial. An ideal approach to monitoring long-term toxicity is to follow the survivors systematically, which is generally not feasible. Instead, cross-sectional surveys were conducted in Hudson et al. (2007), with one objective being to estimate the cumulative incidence rates, with specific interest in fixed-term (5- or 10-year) rates. We apply inference procedures based on current status data to our motivating example, with very interesting findings.
11.
Robert M. Adams, Communications in Statistics: Theory and Methods, 2013, 42(13): 2425-2442
This article generalizes results from Park et al. (1998) and Adams et al. (1999) on semiparametric efficient estimation of panel models. The form of semiparametric efficient estimators depends on the statistical assumptions imposed. Normality assumptions on the transitory error are sometimes inappropriate. We relax the normality assumption used in the articles above to derive more general semiparametric efficient estimators. These estimators are illustrated in a Monte Carlo simulation and an analysis of banking productivity.
12.
Tony Vangeneugden, Geert Verbeke, Clarice G.B. Demétrio, Journal of Applied Statistics, 2011, 38(2): 215-232
Vangeneugden et al. [15] derived approximate correlation functions for longitudinal sequences of general data type, Gaussian and non-Gaussian, based on generalized linear mixed-effects models (GLMM). Their focus was on binary sequences, as well as on a combination of binary and Gaussian sequences. Here, we focus on the specific case of repeated count data, which is important in two respects. First, we employ the model proposed by Molenberghs et al. [13], which simultaneously generalizes the Poisson-normal GLMM and the conventional overdispersion models, in particular the negative-binomial model. The model flexibly accommodates data hierarchies, intra-sequence correlation, and overdispersion. Second, means, variances, and joint probabilities can be expressed in closed form, allowing for exact intra-sequence correlation expressions. In addition to the general situation, some important special cases, such as exchangeable clustered outcomes, are considered, producing insightful expressions. The closed-form expressions are contrasted with the generic approximate expressions of Vangeneugden et al. [15]. Data from an epileptic-seizures trial are analyzed and correlation functions derived. It is shown that the proposed extension strongly outperforms the classical GLMM.
13.
Here, we apply the smoothing technique proposed by Chaubey et al. (2007) to the empirical survival function studied in Bagai and Prakasa Rao (1991) for a sequence of stationary non-negative associated random variables. The derivative of this estimator is in turn used to propose a nonparametric density estimator. The asymptotic properties of the resulting estimators are studied and contrasted with some other competing estimators. A simulation study comparing with the recent estimator based on Poisson weights (Chaubey et al., 2011) shows that the two estimators have comparable finite-sample global as well as local behavior.
14.
Housila P. Singh, Communications in Statistics: Theory and Methods, 2013, 42(23): 4222-4238
This article considers some classes of estimators of the population median of the study variable that use information on an auxiliary variable, together with their properties under large-sample approximation. The asymptotic optimum estimator (AOE) in each class of estimators has been investigated, along with approximate mean square error formulae. It has been shown that the proposed classes of estimators are better than those considered by Gross (1980), Kuk and Mak (1989), Singh et al. (2003a), and Al and Cingi (2009). An empirical study is carried out to judge the merits of the suggested classes of estimators over other existing estimators.
15.
In this article, we have evaluated the performance of different forecasters and tested the association between their performances for different pairs of variables. We have used three data sets of track records of professional U.S. economic forecasters participating in the Blue Chip consensus forecasting service (the data sets contain the root mean square errors (RMSE) of different forecasters for different years). To evaluate the performance of forecasters we have considered three well-known tests, namely the usual F test (cf. Fisher (1923)), the Kruskal-Wallis test (cf. Kruskal and Wallis (1952)), and the extension of the median test (cf. Daniel (1990)). To test the association between the forecasters' performances for different pairs of variables, we have considered the Gini mean correlation coefficient r_g1 (cf. Yitzhaki and Olkin (1991) and Yitzhaki (2003)), a modified rank correlation coefficient (cf. Zimmerman (1994)), and three modifications of the Spearman rank correlation coefficient. We have observed that different forecasters do not necessarily offer the same average performance. Moreover, evidence of association between two criteria does not always lead to the same decision. The outcomes of the study may help practitioners select the best forecaster(s) for policymaking purposes.
16.
There is an emerging consensus in empirical finance that realized volatility series typically display long-range dependence with a memory parameter (d) around 0.4 (Andersen et al., 2001; Martens et al., 2004). The present article provides some illustrative analysis of how long memory may arise from the accumulative process underlying realized volatility. The article also uses results in Lieberman and Phillips (2004, 2005) to refine statistical inference about d by higher-order theory. Standard asymptotic theory has an O(n^(-1/2)) error rate for error rejection probabilities, and the theory used here refines the approximation to an error rate of o(n^(-1/2)). The new formula is independent of unknown parameters, simple to calculate, and user-friendly. The method is applied to test whether the reported long-memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory, and generally confirms earlier findings.
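A standard way to obtain such an estimate of d (the paper's refined higher-order inference goes beyond this) is the Geweke–Porter-Hudak log-periodogram regression: regress the log periodogram at the first m Fourier frequencies on -2 log(lambda). A sketch, assuming the common bandwidth choice m ≈ √n (function name is mine):

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH-style) estimate of the memory parameter d.
    Since log f(lambda) ~ const - 2*d*log(lambda) near zero, regressing
    log I(lambda_j) on -2*log(lambda_j) gives d as the slope."""
    n = len(x)
    if m is None:
        m = int(n ** 0.5)                     # simple bandwidth choice
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    dft = np.fft.fft(x - np.mean(x))[1:m + 1]  # first m Fourier frequencies
    I = (np.abs(dft) ** 2) / (2.0 * np.pi * n)  # periodogram ordinates
    slope = np.polyfit(-2.0 * np.log(lam), np.log(I), 1)[0]
    return slope
```

For short-memory data (white noise) the estimate should hover near d = 0, while realized-volatility series of the kind discussed above typically yield estimates near 0.4.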
17.
A Bottom-Up Dynamic Model of Portfolio Credit Risk with Stochastic Intensities and Random Recoveries
Tomasz R. Bielecki, Areski Cousin, Stéphane Crépey, Alexander Herbertsson, Communications in Statistics: Theory and Methods, 2014, 43(7): 1362-1389
In Bielecki et al. (2014a), the authors introduced a Markov copula model of portfolio credit risk in which pricing and hedging can be done in a sound theoretical and practical way. Further theoretical background and practical details are developed in Bielecki et al. (2014b,c), where the numerical illustrations assumed deterministic intensities and constant recoveries. In the present paper, we show how to incorporate stochastic default intensities and random recoveries into the bottom-up modeling framework of Bielecki et al. (2014a) while preserving numerical tractability. These two features are of primary importance for applications such as CVA computations on credit derivatives (Assefa et al., 2011; Bielecki et al., 2012), as CVA is sensitive to the stochastic nature of credit spreads, and random recoveries make it possible to achieve satisfactory calibration even for "badly behaved" data sets. This article is thus a complement to Bielecki et al. (2014a,b,c).
18.
M. Taghipour, Communications in Statistics: Theory and Methods, 2017, 46(4): 1694-1708
19.
Simard et al. [16, 17] proposed a transformation distance called "tangent distance" (TD) which can make pattern recognition efficient. The key idea is to construct a distance measure that is invariant with respect to some chosen transformations. In this research, we provide a method using adaptive TD, based on an idea inspired by "discriminant adaptive nearest neighbor" [7]. This method is relatively simple compared with many more complicated ones. A real handwriting recognition data set is used to illustrate our new method. Our results demonstrate that the proposed method gives lower classification error rates than standard implementations of neural networks and support vector machines, and is as good as several more complicated approaches.
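The methods described sit on top of plain nearest-neighbor classification with a pluggable distance; replacing Euclidean distance with tangent distance (or its adaptive variant) is the entire change. A minimal 1-NN sketch with a hypothetical `dist` argument (this is the generic scaffold, not the paper's adaptive TD):

```python
def nearest_neighbor_classify(train_X, train_y, query, dist):
    """1-nearest-neighbor classification with a pluggable distance function.
    Passing a tangent-distance implementation as `dist` is what would make
    the classifier invariant to the chosen transformations."""
    best = min(range(len(train_X)), key=lambda i: dist(train_X[i], query))
    return train_y[best]

def squared_euclidean(a, b):
    """Baseline distance; a tangent-distance routine would replace this."""
    return sum((u - v) ** 2 for u, v in zip(a, b))
```

Because the classifier only ever calls `dist`, distance measures of very different sophistication can be compared on identical footing, which is how the error-rate comparisons above are typically organized.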
20.
The present paper focuses attention on the sensitivity of technical inefficiency to the most commonly used one-sided distributions of the inefficiency error term, namely the truncated normal, the half-normal, and the exponential distributions. A generalized version of the half-normal, which does not embody the zero-mean restriction, is also explored. For each distribution, the likelihood function and the counterpart of the estimator of technical efficiency are explicitly stated (Jondrow, J., Lovell, C. A. K., Materov, I. S., Schmidt, P. (1982), On estimation of technical inefficiency in the stochastic frontier production function model, J. Econometrics 19: 233-238). Based on our panel data set, covering Tunisian manufacturing firms over the period 1983-1993, formal tests lead to a strong rejection of the zero-mean restriction embodied in the half-normal distribution. Our main conclusion is that the degree of measured inefficiency is very sensitive to the postulated assumptions about the distribution of the one-sided error term. The estimated inefficiency indices are, however, unaffected by the choice of the functional form for the production function.