Similar Documents
20 similar documents retrieved.
1.
The purpose of this paper is to build a model for aggregate losses, which constitutes a crucial step in evaluating premiums for health insurance systems. It aims at obtaining, via the Bayesian methodology, the predictive distribution of the aggregate loss within each age class of insured persons over the planning horizon. The proposed Bayesian model is a generalization of the collective risk model, a model commonly used for analysing the risk of an insurance system. Aggregate loss prediction is based on past information on the size of losses, the number of losses and the size of the population at risk. In modelling the frequency and severity of losses, the number of losses is assumed to follow a negative binomial distribution, individual loss sizes are independent and identically distributed exponential random variables, and the number of insured persons across a finite number of possible age groups is assumed to follow a multinomial distribution. Prediction of aggregate losses is based on a Gibbs sampling algorithm that incorporates the missing-data approach.
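As a rough illustration of the frequency-severity structure described above, the sketch below simulates the aggregate-loss distribution for a single age class with negative-binomial claim counts and exponential claim sizes. The parameter values and the plain Monte Carlo simulation are illustrative assumptions only; the paper's actual procedure is Bayesian and based on Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not estimated) parameters for one age class
n_sims = 10_000          # Monte Carlo draws of next-period aggregate loss
r, p = 5.0, 0.4          # negative-binomial frequency parameters
mean_severity = 2_000.0  # exponential mean claim size

counts = rng.negative_binomial(r, p, size=n_sims)   # number of losses per draw
aggregate = np.array([rng.exponential(mean_severity, k).sum() for k in counts])

print("mean aggregate loss:", aggregate.mean())
print("95th percentile    :", np.quantile(aggregate, 0.95))
```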

2.
Major sources of information for estimating the size of fish stocks and the rate of their exploitation are samples from which the age composition of catches may be determined. However, the age composition of the catches often varies as a result of several factors. Stratification of the sampling is desirable, because it leads to better estimates of the age composition and of the corresponding variances and covariances. The analysis is impeded by the fact that the response is ordered categorical. This paper introduces an easily applicable method for analysing such data. The method combines continuation-ratio logits with the theory of generalized linear mixed models. Continuation-ratio logits are designed for ordered multinomial responses and have the feature that the associated log-likelihood splits into separate terms for each category level. Thus, generalized linear mixed models can be applied separately to each level of the logits. The method is illustrated by the analysis of age-composition data collected from the Danish sandeel fishery in the North Sea in 1993. The significance of possible sources of variation is evaluated, and formulae for estimating the proportions of each age group and their variance-covariance matrix are derived.
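The key computational feature mentioned in the abstract, that the continuation-ratio likelihood factorizes so each logit level can be fitted separately, can be sketched with plain binomial GLMs (no random effects, unlike the paper's GLMMs). The toy counts, covariate and use of statsmodels are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy age-composition counts per haul (rows) for ages 0-3; covariate x per haul
counts = pd.DataFrame({0: [12, 8, 20], 1: [7, 9, 5], 2: [4, 6, 2], 3: [2, 3, 1]})
x = np.array([0.1, 0.5, 0.9])
ages = list(counts.columns)

fits = {}
for k in ages[:-1]:                          # last level is determined by the rest
    at_risk = counts[[a for a in ages if a >= k]].sum(axis=1)
    success = counts[k]
    # Binomial GLM on (successes, failures): logit P(age = k | age >= k)
    y = np.column_stack([success, at_risk - success])
    X = sm.add_constant(x)
    fits[k] = sm.GLM(y, X, family=sm.families.Binomial()).fit()
    print(f"level {k}: intercept, slope =", fits[k].params)
```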

3.
This paper addresses the problem of detecting a mixture of parallel regression lines when information about group membership of individual cases is not given. The problem is approached as a missing-variable problem, with the missing variables being the dummy variables that code for groups. If a mixture of parallel regression lines with normally distributed error terms is present, a simple regression model without dummy variables will produce residuals that follow approximately a mixed normal distribution. In a simulation study, several goodness-of-fit tests of normality were used to test the residuals obtained from misspecified models that excluded the dummy variables. Factors varied in the simulation included the number and the separation of the parallel lines and the sample size. The goodness-of-fit test based on the sample kurtosis (b2) was overall the most powerful in detecting mixtures of parallel regression lines. Applications are discussed.
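A minimal sketch of the detection idea, under assumed toy parameters: two parallel lines are simulated, the group dummy is withheld from the fit, and the residuals are screened with a kurtosis-based normality test (SciPy's kurtosistest standing in for the b2 test discussed in the paper).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Two parallel lines (same slope, different intercepts), group labels withheld
n = 200
x = rng.uniform(0, 10, n)
group = rng.integers(0, 2, n)                  # hidden dummy variable
y = 1.5 * x + 4.0 * group + rng.normal(0, 1, n)

# Misspecified fit: simple regression without the group dummy
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# Residuals follow (approximately) a two-component normal mixture;
# the kurtosis-based test flags the departure from normality
stat, pvalue = stats.kurtosistest(residuals)
print(f"kurtosis test: statistic = {stat:.2f}, p-value = {pvalue:.4f}")
```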

4.
Here we consider a multinomial probit regression model in which the number of variables substantially exceeds the sample size and only a subset of the available variables is associated with the response. Thus selecting a small number of relevant variables for classification has received a great deal of attention. Generally, when the number of variables is substantial, sparsity-enforcing priors for the regression coefficients are called for on grounds of predictive generalization and computational ease. In this paper, we propose a sparse Bayesian variable selection method in the multinomial probit regression model for multi-class classification. The performance of the proposed method is demonstrated on one simulated data set and three well-known gene expression profiling data sets: breast cancer, leukemia, and small round blue-cell tumors. The results show that, compared with other methods, our method is able to select the relevant variables and can obtain competitive classification accuracy with a small subset of relevant genes.

5.
The number of parameters in a linear mixed effects (LME) model mushrooms in the case of multivariate repeated measures data. Estimating these parameters becomes a real problem as the number of response variables or the number of time points increases, and the problem becomes more intricate with the addition of further random effects. A full multivariate analysis is not possible in a small-sample setting. We propose a method to estimate these many parameters in bits and pieces from 'baby' models, fitted to a subset of response variables at a time, and finally to combine these pieces to obtain the parameter estimates for the 'mother' model with all variables taken together. Applying this method one can calculate the fixed effects, the best linear unbiased predictions (BLUPs) for the random effects in the model, and also the BLUPs at each time of observation for each response variable, in order to monitor the effectiveness of the treatment for each subject. The proposed method is illustrated with an example of multiple response variables measured over multiple time points arising from a clinical trial in osteoporosis.

6.
Latent class analysis (LCA) has important applications in the social and behavioural sciences for modelling categorical response variables, and non-response is typical when collecting data. In this study, the non-response mainly comprised 'contingency questions' and real 'missing data'. The primary objective of this study was to evaluate the effects of some potential factors on model selection indices in LCA with non-response data. We simulated missing data with contingency questions and evaluated the accuracy rates of eight information criteria in selecting the correct model. The results showed that the main factors are the latent class proportions, the conditional probabilities, the sample size, the number of items, the missing-data rate and the contingency-data rate. Interactions of the conditional probabilities with class proportions, sample size and the number of items are also significant. Our simulation results suggest that the impact of missing data and contingency questions can be mitigated by increasing the sample size or the number of items.

7.
Paired data have been widely collected in studies of the efficiency of a new method against an established method in environmental, ecological and medical research. For example, in comparative fishing studies, the ability to catch target species (fish catch) or to reduce the catch of non-target species (fish bycatch) is usually investigated through a paired design. These paired fish catches by weight are generally skewed and continuous, but with a significant portion of exact zeros (no catch). Such zero-inflated continuous data are traditionally handled by two-part models in which the zero and positive components are handled separately; however, this separation generally destroys the paired structure, and thus may make it substantially more difficult to characterize the relative efficiency of the two methods. To overcome this problem, we consider a compound Poisson mixed model for paired data, with which the zero and non-zero components are characterized in an integrated way. In our approach, the clustering effects by pair are captured by incorporating relevant random effects. The model is estimated using an orthodox best linear unbiased predictor approach. Unlike two-part models, our approach unifies inference about the zero and positive components. The method is illustrated with analyses of winter flounder bycatch data and ultrasound safety data.
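A sketch of the data-generating structure the abstract describes: zero-inflated continuous catches from a compound Poisson (Poisson sum of gamma claims) model with a shared pair-level random effect. All parameter values, and the simulation itself, are illustrative assumptions; the paper's estimation uses an orthodox BLUP approach, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

n_pairs = 500
pair_effect = rng.normal(0, 0.5, n_pairs)        # shared random effect per paired tow
gear_effect = {"standard": 1.0, "modified": 0.6}  # illustrative log-rate intercepts

catches = {}
for gear, beta in gear_effect.items():
    rate = np.exp(beta + pair_effect)             # Poisson rate per observation
    n_events = rng.poisson(rate)                  # number of catch "events"
    # Sum of gamma-distributed event sizes: zero exactly when n_events == 0
    catches[gear] = np.array([rng.gamma(2.0, 1.0, k).sum() for k in n_events])

for gear, w in catches.items():
    print(gear, "proportion of zero catches:", np.mean(w == 0))
```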

8.
Generalized linear models (GLMs) are widely studied as a way to deal with complex response variables. For the analysis of categorical dependent variables with more than two response categories, multivariate GLMs are used to build the relationship between this polytomous response and a set of regressors. In the literature, traditional variable selection approaches have been proposed for the multivariate GLM with a canonical link function when the number of parameters is fixed. In many model selection problems, however, the number of parameters may be large and grow with the sample size. In this paper, we present a new selection criterion for models with a diverging number of parameters. Under suitable conditions, the criterion is shown to be model selection consistent. A simulation study and a real data analysis are conducted to support the theoretical findings.

9.
We propose a new approach to the selection of partially linear models based on the conditional expected squared prediction loss, which is estimated using the bootstrap. Because of the different speeds of convergence of the linear and the nonlinear parts, a key idea is to select each part separately. In the first step, we select the nonlinear components using an 'm-out-of-n' residual bootstrap that ensures good properties for the nonparametric bootstrap estimator. The second step selects the linear components from the remaining explanatory variables, and the non-zero parameters are selected based on a two-level residual bootstrap. We show that the model selection procedure is consistent under some conditions, and our simulations suggest that it selects the true model more often than the other selection procedures considered.
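The abstract's two-step selection procedure is not reproduced here, but the 'm-out-of-n' residual bootstrap it relies on can be sketched in its simplest form: an ordinary linear fit whose centred residuals are resampled m < n at a time. The toy model, the choice of m as n raised to the power 0.8, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: the point here is only the m-out-of-n residual bootstrap mechanics
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=n)

slope, intercept = np.polyfit(x, y, 1)
resid = y - (slope * x + intercept)
resid -= resid.mean()                      # centre the residuals

m = int(n ** 0.8)                          # resample m < n residuals per replicate
boot_slopes = []
for _ in range(1000):
    idx = rng.choice(n, size=m, replace=True)           # m design points
    eps = resid[rng.choice(n, size=m, replace=True)]    # m resampled residuals
    y_star = intercept + slope * x[idx] + eps
    b, _ = np.polyfit(x[idx], y_star, 1)
    boot_slopes.append(b)

print("m =", m, " bootstrap SE of slope:", np.std(boot_slopes).round(4))
```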

10.
Mixture separation for mixed-mode data
One possible approach to cluster analysis is the mixture maximum likelihood method, in which the data to be clustered are assumed to come from a finite mixture of populations. The method has been well developed, and much used, for the case of multivariate normal populations. Practical applications, however, often involve mixtures of categorical and continuous variables. Everitt (1988) and Everitt and Merette (1990) recently extended the normal model to deal with such data by incorporating the use of thresholds for the categorical variables. The computations involved in this model are so extensive, however, that it is only feasible for data containing very few categorical variables. In the present paper we consider an alternative model, known as the homogeneous Conditional Gaussian model in graphical modelling and as the location model in discriminant analysis. We extend this model to the finite mixture situation, obtain maximum likelihood estimates for the population parameters, and show that computation is feasible for an arbitrary number of variables. Some data sets are clustered by this method, and a small simulation study demonstrates characteristics of its performance.

11.
Given a set of possible models for variables X and a set of possible parameters for each model, the Bayesian estimate of the probability distribution for X given observed data is obtained by averaging over the possible models and their parameters. An often-used approximation for this estimate is obtained by selecting a single model and averaging over its parameters. The approximation is useful because it is computationally efficient, and because it provides a model that facilitates understanding of the domain. A common criterion for model selection is the posterior probability of the model. Another criterion for model selection, proposed by San Martini and Spezzaferri (1984), is the predictive performance of a model for the next observation to be seen. From the standpoint of domain understanding, both criteria are useful, because one identifies the model that is most likely, whereas the other identifies the model that is the best predictor of the next observation. To highlight the difference, we refer to the posterior-probability and alternative criteria as the scientific criterion (SC) and the engineering criterion (EC), respectively. When we are interested in predicting the next observation, the model-averaged estimate is at least as good as that produced by EC, which itself is at least as good as the estimate produced by SC. We show experimentally that, for Bayesian-network models containing discrete variables only, the predictive performance of the model average can be significantly better than that of single models selected by either criterion, and that differences between the models selected by the two criteria can be substantial.
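A toy illustration of the distinction drawn above, under assumed beta-Bernoulli models rather than the paper's Bayesian networks: the posterior model probability (what the scientific criterion ranks), each single model's next-observation predictive (what the engineering criterion ranks), and the model-averaged predictive that mixes them.

```python
import numpy as np
from scipy.special import betaln

# Toy data: x heads in n coin flips
n, x = 10, 7

# Candidate models: M1 fixes theta = 0.5; M2 puts a Beta(1, 1) prior on theta
log_ml = {
    "M1": n * np.log(0.5),                           # likelihood of the sequence
    "M2": betaln(x + 1, n - x + 1) - betaln(1, 1),   # marginal likelihood
}
w = np.exp(np.array(list(log_ml.values())))
post = w / w.sum()                                   # posterior model probabilities (equal prior odds)

# Predictive probability that the next flip is heads under each model
pred = {"M1": 0.5, "M2": (x + 1) / (n + 2)}
avg_pred = post[0] * pred["M1"] + post[1] * pred["M2"]

# SC picks the most probable model; EC ranks models by predictive performance;
# model averaging mixes the single-model predictions by their posterior weights
print("posterior model probabilities:", dict(zip(log_ml, post.round(3))))
print("single-model predictions:", pred, "| model-averaged:", round(avg_pred, 3))
```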

12.
闫懋博  田茂再 《统计研究》2021,38(1):147-160
For penalized variable selection methods such as the Lasso, the number of variables that can enter the model is limited by the sample size. Existing methods in the literature for testing the significance of variable coefficients discard the information contained in the variables not selected into the model. In the high-dimensional setting where the number of variables exceeds the sample size (p > n), this paper uses a randomized bootstrap to obtain variable weights, constructs the conditional distribution of the selection event when computing the adaptive Lasso, removes the variables whose coefficients are not significant, and thereby obtains the final estimates. The novelty of this paper is that the proposed method breaks through the limit on the number of variables the adaptive Lasso can select and, when the observed data contain a large number of noise variables, can effectively distinguish the true variables from the noise variables. Compared with existing penalized variable selection methods, simulation studies under a variety of scenarios demonstrate the superiority of the proposed method on these two problems. In the empirical study, the NCI-60 cancer cell line data are analysed, and the results show a clear improvement over those in the previous literature.
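A minimal sketch of the adaptive Lasso in a p > n setting, using the standard column-rescaling implementation with ridge-based initial weights standing in for the paper's randomized-bootstrap weights; the conditional post-selection significance step is not reproduced. scikit-learn is assumed to be available, and all data and tuning choices are illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge, LassoCV

rng = np.random.default_rng(3)

n, p = 80, 200                                   # p > n
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3, -2, 1.5, -1, 2]                   # 5 true signals, 195 noise variables
y = X @ beta + rng.normal(size=n)

# Step 1: initial ridge fit gives penalty weights w_j = 1 / |beta_init_j|
beta_init = Ridge(alpha=1.0).fit(X, y).coef_
w = 1.0 / (np.abs(beta_init) + 1e-6)

# Step 2: adaptive Lasso = ordinary Lasso on columns rescaled by 1 / w_j
X_tilde = X / w
lasso = LassoCV(cv=5).fit(X_tilde, y)
beta_hat = lasso.coef_ / w                       # back-transform the coefficients

print("selected variables:", np.flatnonzero(beta_hat != 0)[:10])
```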

13.
A discrimination procedure based on the location model is described and suggested for use in situations where the discriminating variables are a mixture of continuous and binary variables. Procedures previously employed in similar situations, such as Fisher's linear discriminant function and logistic regression, were compared with this method using the error rate (ER). Optimal ERs for these procedures are reported using real and simulated data for varying sample sizes and numbers of continuous and binary variables, and were used as a measure for assessing the performance of the various procedures. The suggested procedure performed considerably better in the cases considered and never produced a result that was poor compared with the other procedures. Hence, the suggested procedure might be considered for such situations.

14.
Because they account for the spatial correlation among economic variables, spatial panel data models have increasingly shown their advantages and have become an active research area in econometrics. The spatial dynamic panel model, which extends the panel model with both spatial correlation and dynamics, accounts not only for the spatial correlation among economic variables but also for time lags; it is a development of the spatial panel model and strengthens the model's explanatory power. We consider a spatial dynamic panel regression model with fixed individual effects, a time lag of the dependent variable, and spatial autocorrelation in both the dependent variable and the random error term, and we propose LM and LR tests for the existence of the time-lag effect in this model under the condition that both the number of individuals n and the number of time periods T are large, with T relatively larger than n; the tests include joint tests as well as one- and two-dimensional marginal and conditional tests. The limiting distributions of these tests under the null hypothesis are derived, and all of them are chi-squared. Monte Carlo experiments are used to study the small-sample properties of the test statistics, and the results show that they have excellent statistical properties.

15.
Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization, in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within-subject correlation is believed to decrease exponentially in time and space, as is common in longitudinal imaging studies. However, the framework applies equally to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high-dimension, low-sample-size data that preclude the use of standard likelihood-based tests. Longitudinal medical imaging data on caudate morphology in schizophrenia illustrate the approach's appeal.
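A rough sketch of how separability can be screened in a small example: after the Van Loan-Pitsianis rearrangement of the covariance matrix, the share of squared singular values captured by the leading term equals 1 exactly when the covariance is a Kronecker product. The AR(1)-type temporal and spatial correlations and this simple diagnostic are illustrative assumptions, not the scientifically informed assessment procedure developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

t, s = 4, 6                        # time points, spatial locations
# Illustrative separable truth: AR(1)-type temporal and spatial correlations
T_cov = 0.7 ** np.abs(np.subtract.outer(np.arange(t), np.arange(t)))
S_cov = 0.5 ** np.abs(np.subtract.outer(np.arange(s), np.arange(s)))
Sigma = np.kron(T_cov, S_cov)      # time-outer, space-inner ordering

def separability_index(Sigma, t, s):
    """Share of 'energy' captured by the nearest Kronecker product
    (rearrangement + rank-1 SVD); equals 1 if Sigma is exactly separable."""
    rows = [Sigma[i*s:(i+1)*s, j*s:(j+1)*s].ravel()
            for i in range(t) for j in range(t)]
    sv = np.linalg.svd(np.array(rows), compute_uv=False)
    return sv[0] ** 2 / np.sum(sv ** 2)

print("exactly separable:", round(separability_index(Sigma, t, s), 3))
noisy = Sigma + 0.1 * np.diag(rng.uniform(size=t * s))   # perturb the structure
print("perturbed        :", round(separability_index(noisy, t, s), 3))
```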

16.
We investigate how to combine marginal assessments about the values that random variables assume separately into a model for the values that they assume jointly, when (i) these marginal assessments are modelled by means of coherent lower previsions and (ii) we have the additional assumption that the random variables are forward epistemically irrelevant to each other. We consider and provide arguments for two possible combinations, namely the forward irrelevant natural extension and the forward irrelevant product, and we study the relationships between them. Our treatment also uncovers an interesting connection between the behavioural theory of coherent lower previsions, and Shafer and Vovk's game-theoretic approach to probability theory.

17.
When a number of distinct models contend for use in prediction, the choice of a single model can offer rather unstable predictions. In regression, stochastic search variable selection with Bayesian model averaging offers a cure for this robustness issue but at the expense of requiring very many predictors. Here we look at Bayes model averaging incorporating variable selection for prediction. This offers similar mean-square errors of prediction but with a vastly reduced predictor space. This can greatly aid the interpretation of the model. It also reduces the cost if measured variables have costs. The development here uses decision theory in the context of the multivariate general linear model. In passing, this reduced predictor space Bayes model averaging is contrasted with single-model approximations. A fast algorithm for updating regressions in the Markov chain Monte Carlo searches for posterior inference is developed, allowing many more variables than observations to be contemplated. We discuss the merits of absolute rather than proportionate shrinkage in regression, especially when there are more variables than observations. The methodology is illustrated on a set of spectroscopic data used for measuring the amounts of different sugars in an aqueous solution.
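A crude, low-dimensional sketch of model averaging with variable selection: all 2^p subsets are enumerated and weighted by exp(-BIC/2) as an approximation to their posterior probabilities, yielding posterior inclusion probabilities. This stands in for, and is far simpler than, the decision-theoretic MCMC machinery described in the abstract; the simulated data and the BIC weighting are assumptions of this sketch.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(6)

n, p = 60, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(size=n)   # two true predictors

def fit_ols(Xs, y):
    """Return residual sum of squares and number of parameters for an OLS fit."""
    Xd = np.column_stack([np.ones(len(y)), Xs])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return float(resid @ resid), Xd.shape[1]

# Enumerate all subsets and weight each model by exp(-BIC/2)
models, bics = [], []
for k in range(p + 1):
    for subset in combinations(range(p), k):
        rss, npar = fit_ols(X[:, list(subset)], y)
        bics.append(n * np.log(rss / n) + npar * np.log(n))
        models.append(subset)

w = np.exp(-0.5 * (np.array(bics) - min(bics)))
w /= w.sum()
incl = np.array([[j in m for j in range(p)] for m in models], dtype=float)
print("posterior inclusion probabilities:", (w @ incl).round(3))
```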

18.
Directed acyclic graph (DAG) models, also called Bayesian networks, are widely used in probabilistic reasoning, machine learning and causal inference. If latent variables are present, then the set of possible marginal distributions over the remaining (observed) variables is generally not represented by any DAG. Larger classes of mixed graphical models have been introduced to overcome this; however, as we show, these classes are not sufficiently rich to capture all the marginal models that can arise. We introduce a new class of hyper-graphs, called mDAGs, and a latent projection operation to obtain an mDAG from the margin of a DAG. We show that each distinct marginal of a DAG model is represented by at least one mDAG and provide graphical results towards characterizing equivalence of these models. Finally, we show that mDAGs correctly capture the marginal structure of causally interpreted DAGs under interventions on the observed variables.

19.
We consider the estimation of Poisson regression models in which structural variation in a subset of the parameters is permitted. It is noted that conventional estimation algorithms are likely to impose restrictions on the number of explanatory variables and the number of structural regimes. We propose an alternative algorithm that implements partitioned matrix inversion and thereby avoids restrictions on the size of the model. The algorithm is applied to a model of shopping behavior. Adjustments in the algorithm necessary for dealing with censored data are detailed.
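The partitioned matrix inversion the abstract refers to is the standard Schur-complement identity, sketched below on a random symmetric positive-definite matrix rather than an actual Poisson-regression information matrix.

```python
import numpy as np

rng = np.random.default_rng(4)

# Build a symmetric positive-definite matrix and split it into blocks
k1, k2 = 4, 3
M = rng.normal(size=(k1 + k2, k1 + k2))
M = M @ M.T + (k1 + k2) * np.eye(k1 + k2)
A, B = M[:k1, :k1], M[:k1, k1:]
C, D = M[k1:, :k1], M[k1:, k1:]

# Partitioned inverse via the Schur complement S = D - C A^{-1} B
A_inv = np.linalg.inv(A)
S_inv = np.linalg.inv(D - C @ A_inv @ B)
top_left = A_inv + A_inv @ B @ S_inv @ C @ A_inv
top_right = -A_inv @ B @ S_inv
bottom_left = -S_inv @ C @ A_inv
M_inv = np.block([[top_left, top_right], [bottom_left, S_inv]])

assert np.allclose(M_inv, np.linalg.inv(M))   # matches the direct inverse
```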

20.
Recently, non-uniform sampling has been suggested in microscopy to increase efficiency. More precisely, probability-proportional-to-size (PPS) sampling has been introduced, where the probability of sampling a unit in the population is proportional to the value of an auxiliary variable. In the microscopy application, the sampling units are fields of view, and the auxiliary variables are easily observed approximations to the variables of interest. Unfortunately, some auxiliary variables often vanish, that is, are zero-valued. Consequently, part of the population is inaccessible to PPS sampling. We propose a modification of the design based on a stratification idea, for which an optimal solution can be found using a model-assisted approach. The new optimal design also applies to the case where 'vanish' refers to missing auxiliary variables, and it is of independent interest in sampling theory. We verify the robustness of the new approach by numerical results, and we use real data to illustrate its applicability.
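A small sketch of the sampling issue and of a stratified remedy in the spirit described above: units with a zero auxiliary value can never be drawn under plain PPS, so they are placed in their own stratum and sampled with equal probabilities. The allocation and the with-replacement PPS draw are illustrative simplifications, not the paper's optimal model-assisted design.

```python
import numpy as np

rng = np.random.default_rng(5)

aux = np.array([0, 0, 3, 1, 0, 8, 2, 0, 5, 4], dtype=float)   # auxiliary sizes
n_pps, n_zero = 3, 1                                           # illustrative allocation

zero_idx = np.flatnonzero(aux == 0)
pos_idx = np.flatnonzero(aux > 0)

# PPS (with replacement, for simplicity) within the positive stratum
p = aux[pos_idx] / aux[pos_idx].sum()
sample_pps = rng.choice(pos_idx, size=n_pps, replace=True, p=p)

# Equal-probability sample within the zero-valued stratum, so no unit is inaccessible
sample_zero = rng.choice(zero_idx, size=n_zero, replace=False)

print("PPS sample  :", sample_pps)
print("zero stratum:", sample_zero)
```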
