Full text (subscription): 12 articles
Free: 0 articles
Subject: Statistics, 12 articles
By year: 2013 (1), 2009 (3), 2008 (1), 2006 (1), 2004 (1), 2002 (2), 2001 (2), 1998 (1)
12 query results in total
1.
Suppose that one wishes to rank k normal populations, each with common variance σ² and unknown means θ_i (i = 1, 2, …, k). Independent samples of size n are taken from each population, and the sample averages are used to rank the populations. In this paper, we investigate what sample sizes, n, are necessary to attain “good” rankings under various loss functions. Section 1 discusses various loss functions and their interpretation. Section 2 gives the solution for a reasonable non-parametric loss function. Section 3 gives the solution for a reasonable parametric loss function.
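A minimal simulation sketch of the setting (not the paper's analysis): for hypothetical means and a common variance, it estimates by Monte Carlo how often the sample averages reproduce the true ordering of the k populations as the per-population sample size n grows. The means, variance and sample sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.array([0.0, 0.2, 0.4, 0.6])   # assumed true means (illustrative only)
sigma = 1.0                               # assumed common standard deviation

def prob_correct_ranking(n, reps=2000):
    """Monte Carlo probability that the sample means give the true ranking."""
    correct = 0
    for _ in range(reps):
        xbar = rng.normal(theta, sigma / np.sqrt(n))  # sample means of n observations each
        if np.array_equal(np.argsort(xbar), np.argsort(theta)):
            correct += 1
    return correct / reps

for n in (10, 50, 200, 800):
    print(n, prob_correct_ranking(n))
```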
2.
We consider the consistency of the Bayes factor in goodness-of-fit testing for a parametric family of densities against a non-parametric alternative. Sufficient conditions for consistency of the Bayes factor are determined and demonstrated with priors using certain mixtures of triangular densities.
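A hedged illustration of the kind of Bayes factor involved (not the paper's construction or its consistency argument): the marginal likelihood of a parametric Beta family is compared with that of a Dirichlet-weighted mixture of triangular densities, both approximated by Monte Carlo averages over their priors. The priors, grid of modes and simulated data are all assumptions.

```python
import numpy as np
from scipy.stats import beta, triang
from scipy.special import logsumexp

rng = np.random.default_rng(1)
x = rng.beta(2.0, 5.0, size=100)              # simulated data on (0, 1)
S = 2000                                      # Monte Carlo draws per prior

def log_marginal(loglik_draws):
    """log of the Monte Carlo average of the likelihood over prior draws."""
    return logsumexp(loglik_draws) - np.log(len(loglik_draws))

# Parametric model: Beta(a, b) with independent Exponential(1) priors on a and b (assumed).
a_draws, b_draws = rng.exponential(1.0, S), rng.exponential(1.0, S)
ll0 = np.array([beta.logpdf(x, a, b).sum() for a, b in zip(a_draws, b_draws)])

# Alternative: mixture of J triangular densities with modes on a grid,
# mixture weights drawn from a symmetric Dirichlet prior (assumed).
J = 5
modes = (np.arange(J) + 0.5) / J
dens = np.stack([triang.pdf(x, c=m, loc=0.0, scale=1.0) for m in modes])   # (J, n)
w_draws = rng.dirichlet(np.ones(J), size=S)                                # (S, J)
ll1 = np.log(w_draws @ dens).sum(axis=1)

print("log Bayes factor (mixture alternative vs Beta family):",
      log_marginal(ll1) - log_marginal(ll0))
```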
3.
Survival studies often collect information about covariates. If these covariates are believed to contain information about the life-times, they may be considered when estimating the underlying life-time distribution. We propose a non-parametric estimator which uses the recorded information about the covariates. Various forms of incomplete data, e.g. right-censored data, are allowed. The estimator is the conditional mean of the true empirical survival function given the observed history, and it is derived using a general filtering formula. Feng & Kurtz (1994) showed that the estimator is the Kaplan–Meier estimator in the case of right-censoring when using the observed life-times and censoring-times as the observed history. We take the same approach as Feng & Kurtz (1994) but in addition we incorporate the recorded information about the covariates in the observed history. Two models are considered and in both cases the Kaplan–Meier estimator is a special case of the estimator. In a simulation study the estimator is compared with the Kaplan–Meier estimator in small samples.
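For reference, a minimal sketch of the classical Kaplan–Meier estimator, which the abstract notes is the special case obtained when the observed history contains only the life-times and censoring-times (covariate information ignored). The small data set at the end is made up for illustration.

```python
import numpy as np

def kaplan_meier(times, events):
    """Return the distinct event times and the KM survival estimate just after each.

    times  : observed times (failure or right-censoring)
    events : 1 if the time is an observed failure, 0 if right-censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    surv, s = [], 1.0
    event_times = np.unique(times[events == 1])
    for t in event_times:
        at_risk = np.sum(times >= t)                   # still under observation just before t
        d = np.sum((times == t) & (events == 1))       # failures at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return event_times, np.array(surv)

t = [2.0, 3.0, 3.0, 5.0, 7.0, 8.0]
e = [1, 1, 0, 1, 0, 1]
print(kaplan_meier(t, e))
```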
4.
The proportional hazards assumption of the Cox model sometimes does not hold in practice. An example is a treatment effect that decreases with time. We study a general multiplicative intensity model allowing the influence of each covariate to vary non-parametrically with time. An efficient estimation procedure for the cumulative parameter functions is developed. Its properties are studied using the martingale structure of the problem. Furthermore, we introduce a partly parametric version of the general non-parametric model in which the influence of some of the covariates varies with time while the effects of the remaining covariates are constant. This semiparametric model has not been studied in detail before. An efficient procedure for estimating the parametric as well as the non-parametric components of this model is developed. Again the martingale structure of the model allows us to describe the asymptotic properties of the suggested estimators. The approach is applied to two different data sets, and a Monte Carlo simulation is presented.
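A hedged illustration of the motivating problem, not the authors' estimator: data are simulated with a treatment effect that decays with time, and a crude piecewise-constant hazard ratio is estimated from event counts and person-time within follow-up intervals. The constant-hazard-ratio assumption of the plain Cox model would average over these clearly different interval-specific ratios. All rates and cut-points are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
trt = rng.integers(0, 2, n)

# Piecewise-constant hazards: strong early treatment benefit that disappears later.
cuts = np.array([0.0, 1.0, 2.0, np.inf])
base = np.array([0.5, 0.5, 0.5])        # control-group hazard per interval (assumed)
hr_true = np.array([0.4, 0.7, 1.0])     # true hazard ratio per interval (assumed)

def sample_time(is_treated):
    """Simulate a failure time from the piecewise-exponential model above."""
    t = 0.0
    for hi, h0, hr in zip(cuts[1:], base, hr_true):
        rate = h0 * (hr if is_treated else 1.0)
        gap = rng.exponential(1.0 / rate)
        if t + gap < hi:
            return t + gap
        t = hi
    return t

times = np.array([sample_time(z) for z in trt])

for lo, hi in zip(cuts[:-1], cuts[1:]):
    rates = []
    for g in (1, 0):
        t_g = times[trt == g]
        persontime = np.sum(np.clip(t_g, lo, hi) - lo)      # exposure within [lo, hi)
        events = np.sum((t_g >= lo) & (t_g < hi))
        rates.append(events / persontime)
    print(f"[{lo}, {hi}): estimated hazard ratio {rates[0] / rates[1]:.2f}")
```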
5.
Markov Beta and Gamma Processes for Modelling Hazard Rates (total citations: 1; self-citations: 0; citations by others: 1)
This paper generalizes the discrete time independent increment beta process of Hjort (1990), for modelling discrete failure times, and also generalizes the independent gamma process for modelling piecewise constant hazard rates (Walker and Mallick, 1997). The generalizations are from independent increment to Markov increment prior processes, allowing the modelling of smoothness. We derive posterior distributions and undertake a full Bayesian analysis.
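A minimal sketch of the "independent increments to Markov increments" idea for piecewise-constant hazards, under assumed hyperparameters and not the paper's exact construction: each hazard level is drawn from a gamma distribution centred at the previous level, with a parameter c controlling how smooth the prior path is.

```python
import numpy as np

rng = np.random.default_rng(3)

def markov_gamma_hazard(J, a=2.0, b=2.0, c=20.0):
    """Sample J piecewise-constant hazard levels with Markov (chain) dependence."""
    lam = np.empty(J)
    lam[0] = rng.gamma(shape=a, scale=1.0 / b)           # lambda_1 ~ Gamma(a, b)
    for j in range(1, J):
        # E[lambda_j | lambda_{j-1}] = lambda_{j-1}; larger c gives a smoother path.
        lam[j] = rng.gamma(shape=c, scale=lam[j - 1] / c)
    return lam

print(markov_gamma_hazard(10))            # one smooth prior draw of 10 hazard levels
print(markov_gamma_hazard(10, c=2.0))     # a rougher draw with weaker dependence
```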
6.
Modelling Heterogeneity With and Without the Dirichlet Process (total citations: 4; self-citations: 0; citations by others: 4)
We investigate the relationships between Dirichlet process (DP) based models and allocation models for a variable number of components, based on exchangeable distributions. It is shown that the DP partition distribution is a limiting case of a Dirichlet–multinomial allocation model. Comparisons of posterior performance of DP and allocation models are made in the Bayesian paradigm and illustrated in the context of univariate mixture models. It is shown in particular that the unbalancedness of the allocation distribution, present in the prior DP model, persists a posteriori. Exploiting the model connections, a new MCMC sampler for general DP based models is introduced, which uses split/merge moves in a reversible jump framework. Performance of this new sampler relative to that of some traditional samplers for DP-based models is then explored.
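An illustrative sketch of the limiting relationship the abstract describes (all constants are assumptions, and this is not the paper's comparison): n items are allocated under a k-component Dirichlet–multinomial model with weights drawn from Dirichlet(alpha/k, ..., alpha/k), and the occupied cluster sizes are compared with a draw from the Chinese restaurant process, i.e. the DP partition, as k grows.

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha = 200, 1.0

def dirichlet_multinomial_partition(k):
    """Sorted occupied-cluster sizes from a k-component Dirichlet-multinomial allocation."""
    w = rng.dirichlet(np.full(k, alpha / k))
    labels = rng.choice(k, size=n, p=w)
    sizes = np.bincount(labels)
    return np.sort(sizes[sizes > 0])[::-1]

def crp_partition():
    """Sorted cluster sizes from a Chinese restaurant process with concentration alpha."""
    sizes = []
    for _ in range(n):
        probs = np.array(sizes + [alpha], dtype=float)
        j = rng.choice(len(probs), p=probs / probs.sum())
        if j == len(sizes):
            sizes.append(1)           # open a new cluster
        else:
            sizes[j] += 1             # join an existing cluster
    return np.sort(sizes)[::-1]

for k in (5, 50, 5000):
    print(f"k = {k}:", dirichlet_multinomial_partition(k))
print("CRP:    ", crp_partition())
```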
7.
Conjugacy as a Distinctive Feature of the Dirichlet Process (total citations: 1; self-citations: 1; citations by others: 0)
Recently the class of normalized random measures with independent increments, which contains the Dirichlet process as a particular case, has been introduced. Here a new technique for deriving moments of these random probability measures is proposed. It is shown that, a priori, most of the appealing properties featured by the Dirichlet process are preserved. When passing to posterior computations, we obtain a characterization of the Dirichlet process as the only conjugate member of the whole class of normalized random measures with independent increments.
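A small sketch of the normalization idea behind this class, restricted to a finite partition: independent gamma increments divided by their total have a Dirichlet distribution, which is the finite-dimensional law of the Dirichlet process. The base-measure masses below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
a = np.array([0.5, 1.0, 1.5, 2.0])    # assumed base-measure masses of 4 partition sets

# Normalize independent Gamma(a_i, 1) increments ...
g = rng.gamma(shape=a, scale=1.0, size=(100_000, len(a)))
normalized = g / g.sum(axis=1, keepdims=True)

# ... and compare the empirical means with the Dirichlet(a) means a_i / sum(a).
print(normalized.mean(axis=0))
print(a / a.sum())
```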
8.
Standard analysis for ranks from two-way layout data with ties, or for 'rank transformed' data with ties, can be extended to allow market researchers to make better comparisons between products. In addition to detecting effects on the products' average ranks, the new analysis allows detection of significant nonlinear effects, umbrella effects, linear contrasts and differences in distributions. The paper presents market research results comparing three types of french fries. There are no differences according to the standard Friedman analysis. However, the new analysis finds significant nonlinear effects, which give the manufacturer important information.
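A hedged sketch with made-up ratings (not the paper's french-fry data or its extended test): the standard Friedman analysis of a two-way layout via SciPy, followed by a simple linear contrast computed on within-panelist mid-ranks, the kind of directed comparison that can pick up structure the omnibus test misses.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rows = panelists (blocks), columns = three hypothetical fry types
ratings = np.array([
    [3, 5, 4],
    [2, 5, 5],
    [4, 4, 3],
    [3, 6, 5],
    [5, 5, 4],
    [2, 4, 4],
])

print(friedmanchisquare(*ratings.T))                 # standard Friedman omnibus test

ranks = np.apply_along_axis(rankdata, 1, ratings)    # mid-ranks within each panelist
contrast = np.array([-1.0, 0.0, 1.0])                # linear contrast across ordered products
scores = ranks @ contrast
print("mean linear contrast:", scores.mean(),
      "s.e.:", scores.std(ddof=1) / np.sqrt(len(scores)))
```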
9.
We consider a modelling approach to longitudinal data that aims at estimating flexible covariate effects in a model where the sampling probabilities are modelled explicitly. The joint modelling yields simple estimators that are easy to compute and analyse, even if the sampling of the longitudinal responses interacts with the response level. An incorrect model for the sampling probabilities results in biased estimates. Non-representative sampling occurs, for example, if patients with an extreme development (based on extreme values of the response) are called in for additional examinations and measurements. We allow covariate effects to be time-varying or time-constant. Estimates of covariate effects are obtained by solving martingale equations locally for the cumulative regression functions. Using Aalen's additive model for the sampling probabilities, we obtain simple expressions for the estimators and their asymptotic variances. The asymptotic distributions for the estimators of the non-parametric components as well as the parametric components of the model are derived drawing on general martingale results. Two applications are presented. We consider the growth of cystic fibrosis patients and the prothrombin index for liver cirrhosis patients. The conclusion about the growth of the cystic fibrosis patients is not altered when adjusting for a possible non-representativeness in the sampling, whereas we reach substantively different conclusions about the treatment effect for the liver cirrhosis patients.
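A simplified, hedged illustration of why the sampling-probability model matters (not the authors' martingale estimator): when subjects with high responses are measured more often, the naive mean of the observed responses is biased, while weighting each observation by the inverse of a correctly specified sampling probability removes the bias. The response distribution and sampling model below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 50_000
y = rng.normal(0.0, 1.0, n)                        # underlying responses, true mean 0

# Sampling probability increases with the response level (non-representative sampling).
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * y)))
observed = rng.random(n) < p

naive = y[observed].mean()
ipw = np.sum(y[observed] / p[observed]) / np.sum(1.0 / p[observed])   # weighted (Hajek-type) mean
print("naive mean of sampled responses:", round(naive, 3))
print("inverse-probability-weighted mean:", round(ipw, 3))
```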
10.
In functional data analysis, curves or surfaces are observed, up to measurement error, at a finite set of locations, for, say, a sample of n individuals. Often, the curves are homogeneous, except perhaps for individual-specific regions that provide heterogeneous behaviour (e.g. 'damaged' areas of irregular shape on an otherwise smooth surface). Motivated by applications with functional data of this nature, we propose a Bayesian mixture model, with the aim of dimension reduction, by representing the sample of n curves through a smaller set of canonical curves. We propose a novel prior on the space of probability measures for a random curve which extends the popular Dirichlet priors by allowing local clustering: non-homogeneous portions of a curve can be allocated to different clusters and the n individual curves can be represented as recombinations (hybrids) of a few canonical curves. More precisely, the proposed prior envisions a conceptual hidden factor with k levels that acts locally on each curve. We discuss several models incorporating this prior and illustrate its performance with simulated and real data sets. We examine theoretical properties of the proposed finite hybrid Dirichlet mixtures, specifically their behaviour as the number of mixture components goes to ∞ and their connection with Dirichlet process mixtures.
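A toy generative sketch of the "hybrid" idea only, not the paper's prior or its inference: a few canonical curves are fixed on a grid, a hidden local label switches between them along the grid, and each observed curve is a recombination of canonical segments plus measurement error. The canonical curves, segment structure and noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
grid = np.linspace(0.0, 1.0, 200)

canonical = np.stack([
    np.sin(2 * np.pi * grid),             # canonical curve 1
    np.sin(2 * np.pi * grid) + 1.0,       # canonical curve 2 (shifted copy)
    0.5 * np.cos(2 * np.pi * grid),       # canonical curve 3
])
k = canonical.shape[0]

def hybrid_curve(n_segments=4, noise=0.05):
    """One observed curve: locally allocated to canonical curves, segment by segment."""
    labels = rng.integers(0, k, n_segments)                     # hidden local factor levels
    bounds = np.linspace(0, len(grid), n_segments + 1).astype(int)
    y = np.empty_like(grid)
    for lab, lo, hi in zip(labels, bounds[:-1], bounds[1:]):
        y[lo:hi] = canonical[lab, lo:hi]
    return y + rng.normal(0.0, noise, len(grid))                # add measurement error

curves = np.array([hybrid_curve() for _ in range(10)])          # a small sample of n = 10 curves
print(curves.shape)
```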