Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone is responsible for determining globally all of the following attributes of the interpolant: its complexity, flexibility, smoothness, characteristic scale length, and characteristic amplitude. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of conditional convexity when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error.

2.
CHU  HUI-MAY  KUO  LYNN 《Statistics and Computing》1997,7(3):183-192
Bayesian methods for estimating the dose response curves with the one-hit model, the gamma multi-hit model, and their modified versions with Abbott's correction are studied. The Gibbs sampling approach with data augmentation and with the Metropolis algorithm is employed to compute the Bayes estimates of the potency curves. In addition, estimation of the relative additional risk and the virtually safe dose is studied. Model selection based on conditional predictive ordinates from cross-validated data is developed.
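The Metropolis machinery mentioned above can be illustrated in miniature. The sketch below runs a random-walk Metropolis sampler for the potency parameter λ of the one-hit model, P(response at dose d) = 1 − exp(−λd), under a flat prior. The data, starting value, and step size are invented for illustration; this is not the paper's full scheme (no data augmentation, no Abbott's correction).

```python
import math
import random

def one_hit_prob(lam, dose):
    # One-hit model: probability of response at a given dose
    return 1.0 - math.exp(-lam * dose)

def log_lik(lam, doses, responders, totals):
    # Binomial log-likelihood of the one-hit model; lambda must be positive
    if lam <= 0:
        return -math.inf
    ll = 0.0
    for d, r, n in zip(doses, responders, totals):
        p = min(max(one_hit_prob(lam, d), 1e-12), 1.0 - 1e-12)
        ll += r * math.log(p) + (n - r) * math.log(1.0 - p)
    return ll

def metropolis(doses, responders, totals, n_iter=5000, step=0.1, seed=1):
    # Random-walk Metropolis for lambda under a flat prior
    rng = random.Random(seed)
    lam = 1.0
    cur = log_lik(lam, doses, responders, totals)
    samples = []
    for _ in range(n_iter):
        prop = lam + rng.gauss(0.0, step)
        new = log_lik(prop, doses, responders, totals)
        if math.log(rng.random()) < new - cur:
            lam, cur = prop, new
        samples.append(lam)
    return samples

# Invented dose-response data: 40 subjects per dose group
doses = [0.5, 1.0, 2.0, 4.0]
responders = [10, 18, 30, 38]
totals = [40, 40, 40, 40]
samples = metropolis(doses, responders, totals)
post_mean = sum(samples[1000:]) / len(samples[1000:])
```

A Bayes estimate of the potency curve then follows by averaging `one_hit_prob(lam, d)` over the retained samples.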

3.
Comparison of observed mortality with known, background, or standard rates has taken place for several hundred years. With the development of regression models for survival data, an increasing interest has arisen in individualizing the standardisation using covariates of each individual. Also, account sometimes needs to be taken of random variation in the standard group. Emphasizing uses of the Cox regression model, this paper surveys a number of critical choices and pitfalls in this area. The methods are illustrated by comparing survival of liver patients after transplantation with survival after conservative treatment.

4.
A probabilistic expert system provides a graphical representation of a joint probability distribution which can be used to simplify and localize calculations. Jensen et al. (1990) introduced a flow-propagation algorithm for calculating marginal and conditional distributions in such a system. This paper analyses that algorithm in detail, and shows how it can be modified to perform other tasks, including maximization of the joint density and simultaneous fast retraction of evidence entered on several variables.

5.
Principal curves revisited
A principal curve (Hastie and Stuetzle, 1989) is a smooth curve passing through the middle of a distribution or data cloud, and is a generalization of linear principal components. We give an alternative definition of a principal curve, based on a mixture model. Estimation is carried out through an EM algorithm. Some comparisons are made to the Hastie-Stuetzle definition.

6.
The paper presents non-standard methods in evolutionary computation and discusses their applicability to various optimization problems. These methods maintain populations of individuals with nonlinear chromosomal structures and use genetic operators enhanced by problem-specific knowledge.
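As a toy illustration of that theme, the sketch below uses a nonlinear chromosomal structure (a permutation rather than a bit string) together with a problem-specific operator (a swap mutation that preserves the permutation property) to minimize the number of adjacent inversions. The objective, operators, and tuning constants are ours, not the paper's.

```python
import random

def fitness(perm):
    # Problem-specific objective: number of adjacent inversions (0 = sorted)
    return sum(1 for a, b in zip(perm, perm[1:]) if a > b)

def swap_mutation(perm, rng):
    # Operator that respects the permutation structure of the chromosome
    child = perm[:]
    i, j = rng.sample(range(len(child)), 2)
    child[i], child[j] = child[j], child[i]
    return child

def evolve(n=8, pop_size=20, generations=150, seed=0):
    rng = random.Random(seed)
    base = list(range(n))
    pop = [rng.sample(base, n) for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        # Mutate randomly chosen parents; elitist truncation selection
        offspring = [swap_mutation(rng.choice(pop), rng) for _ in range(pop_size)]
        pop = sorted(pop + offspring, key=fitness)[:pop_size]
        best = min([best] + pop, key=fitness)
    return best

best = evolve()
```

With elitism the best fitness seen is non-increasing across generations, so the search converges towards the sorted permutation.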

7.
I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently developed method of simulated tempering, the tempered transition method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that is unfortunately cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are deceptive.
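A minimal sketch of a tempered transition on a two-mode toy target (an equal mixture of N(−4,1) and N(4,1)), using tempered densities p_i ∝ p^β_i. The ladder, step sizes, and number of updates per level are invented tuning choices, not values from the paper.

```python
import math
import random

BETAS = [1.0, 0.5, 0.2, 0.05]  # BETAS[0] is the target; larger index = flatter

def log_target(x):
    # Bimodal target 0.5*N(-4,1) + 0.5*N(4,1), via log-sum-exp for stability
    a = -0.5 * (x + 4.0) ** 2
    b = -0.5 * (x - 4.0) ** 2
    m = max(a, b)
    return m + math.log(0.5 * math.exp(a - m) + 0.5 * math.exp(b - m))

def log_p(i, x):
    # Interpolating (tempered) log density at ladder level i
    return BETAS[i] * log_target(x)

def metropolis_sweep(i, x, rng, n_steps=5):
    # A few random-walk Metropolis updates leaving p_i invariant
    step = 1.0 / math.sqrt(BETAS[i])
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_p(i, prop) - log_p(i, x):
            x = prop
    return x

def tempered_transition(x, rng):
    # Systematic sweep up to the flattest level and back, accumulating the
    # log acceptance ratio; no normalizing constants are needed
    log_ratio = 0.0
    for i in range(1, len(BETAS)):          # heating pass
        log_ratio += log_p(i, x) - log_p(i - 1, x)
        x = metropolis_sweep(i, x, rng)
    for i in range(len(BETAS) - 1, 0, -1):  # cooling pass
        x = metropolis_sweep(i, x, rng)
        log_ratio += log_p(i - 1, x) - log_p(i, x)
    return x, log_ratio

rng = random.Random(0)
x, samples = -4.0, []
for _ in range(4000):
    prop, log_r = tempered_transition(x, rng)
    if math.log(rng.random()) < log_r:
        x = prop
    samples.append(x)
```

The accumulated ratio involves only unnormalized density evaluations, which is the advantage over simulated tempering noted above.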

8.
We investigate the properties of several statistical tests for comparing treatment groups with respect to multivariate survival data, based on the marginal analysis approach introduced by Wei, Lin and Weissfeld [Regression analysis of multivariate incomplete failure time data by modelling marginal distributions, JASA vol. 84, pp. 1065–1073]. We consider two types of directional tests, based on a constrained maximization and on linear combinations of the unconstrained maximizer of the working likelihood function, and the omnibus test arising from the same working likelihood. The directional tests are members of a larger class of tests, from which an asymptotically optimal test can be found. We compare the asymptotic powers of the tests under general contiguous alternatives for a variety of settings, and also consider the choice of the number of survival times to include in the multivariate outcome. We illustrate the results with simulations and with the results from a clinical trial examining recurring opportunistic infections in persons with HIV.

9.
Summary: The next German census will be an Administrative Record Census. Data from several administrative registers about persons will be merged. Object identification has to be applied, since no unique identification number exists in the registers. We present a two-step procedure. We briefly discuss questions such as correctness and completeness of the Administrative Record Census. Then we focus on the object identification problem, which can be perceived as a special classification problem. Pairs of records are to be classified as matched or not matched. To achieve computational efficiency, a preselection technique of pairs is applied. Our approach is illustrated with a database containing a large set of consumer addresses. *This work was partially supported by the Berlin-Brandenburg Graduate School in Distributed Information Systems (DFG grant no. GRK 316). The authors thank Michael Fürnrohr for previewing the paper. We would also like to thank an anonymous reviewer for helpful comments.
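A minimal sketch of the two steps described above: blocking-style preselection of record pairs, followed by classification as matched or not. The toy records, block key, similarity score, and threshold are all invented for illustration and are not the authors' actual procedure.

```python
import itertools
from difflib import SequenceMatcher

# Invented register extracts; no shared identification number exists
register_a = [
    {"id": "a1", "name": "Schmidt, Anna", "city": "Berlin", "born": 1970},
    {"id": "a2", "name": "Meyer, Karl", "city": "Hamburg", "born": 1955},
]
register_b = [
    {"id": "b1", "name": "Schmid, Anna", "city": "Berlin", "born": 1970},
    {"id": "b2", "name": "Weber, Jutta", "city": "Berlin", "born": 1980},
    {"id": "b3", "name": "Meyer, Carl", "city": "Hamburg", "born": 1955},
]

def block_key(rec):
    # Preselection ("blocking"): only pairs sharing this key are compared
    return (rec["city"].lower(), rec["name"][0].lower())

def candidate_pairs(reg_a, reg_b):
    for r, s in itertools.product(reg_a, reg_b):
        if block_key(r) == block_key(s):
            yield r, s

def match_score(r, s):
    # Classification step: name similarity plus a bonus for birth-year agreement
    name_sim = SequenceMatcher(None, r["name"], s["name"]).ratio()
    return name_sim + (0.2 if r["born"] == s["born"] else 0.0)

matches = [(r["id"], s["id"])
           for r, s in candidate_pairs(register_a, register_b)
           if match_score(r, s) > 1.0]
```

Pairs sharing no block key are never compared, which is what makes the preselection computationally attractive on large address databases.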

10.
The posterior distribution of the likelihood is used to interpret the evidential meaning of P-values, posterior Bayes factors and Akaike's information criterion when comparing point null hypotheses with composite alternatives. Asymptotic arguments lead to simple re-calibrations of these criteria in terms of posterior tail probabilities of the likelihood ratio. (Prior) Bayes factors cannot be calibrated in this way as they are model-specific.

11.
In this largely expository article, we highlight the significance of various types of dimension for obtaining uniform convergence results in probability theory and we demonstrate how these results lead to certain notions of generalization for classes of binary-valued and real-valued functions. We also present new results on the generalization ability of certain types of artificial neural networks with real output.

12.
We propose exploratory, easily implemented methods for diagnosing the appropriateness of an underlying copula model for bivariate failure time data, allowing censoring in either or both failure times. It is found that the proposed approach effectively distinguishes gamma from positive stable copula models when the sample is moderately large or the association is strong. Data from the Women's Health and Aging Study (WHAS, Guralnik et al., The Women's Health and Aging Study: Health and Social Characteristics of Older Women with Disability. National Institute on Aging: Bethesda, Maryland, 1995) are analyzed to demonstrate the proposed diagnostic methodology. The positive stable model gives a better overall fit to these data than the gamma frailty model, but it tends to underestimate association at the later time points. The finding is consistent with recent theory differentiating catastrophic from progressive disability onset in older adults. The proposed methods supply an interpretable quantity for copula diagnosis. We hope that they will usefully inform practitioners as to the reasonableness of their modeling choices.

13.
We present a new test for the presence of a normal mixture distribution, based on the posterior Bayes factor of Aitkin (1991). The new test has slightly lower power than the likelihood ratio test. It does not require the computation of the MLEs of the parameters or a search for multiple maxima, but requires computations based on classification likelihood assignments of observations to mixture components.
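The classification-likelihood ingredient can be sketched as follows: assign each observation to one of two components (here crudely, by thresholding at the sample mean, a stand-in for the classification assignments mentioned above, not Aitkin's procedure) and compare the classified two-component log-likelihood with the single-normal log-likelihood. The split always increases the likelihood, which is why such a statistic needs calibration; the data and split rule are invented for illustration.

```python
import math
import random

def normal_loglik(xs):
    # Gaussian log-likelihood evaluated at the MLEs (mean, biased variance)
    n = len(xs)
    m = sum(xs) / n
    v = sum((x - m) ** 2 for x in xs) / n
    return -0.5 * n * (math.log(2.0 * math.pi * v) + 1.0)

def classification_stat(xs):
    # Crude classification step: split at the overall mean (illustrative only)
    m = sum(xs) / len(xs)
    left = [x for x in xs if x <= m]
    right = [x for x in xs if x > m]
    return 2.0 * (normal_loglik(left) + normal_loglik(right) - normal_loglik(xs))

rng = random.Random(42)
unimodal = [rng.gauss(0.0, 1.0) for _ in range(200)]
bimodal = ([rng.gauss(-3.0, 1.0) for _ in range(100)]
           + [rng.gauss(3.0, 1.0) for _ in range(100)])
stat_uni = classification_stat(unimodal)
stat_bi = classification_stat(bimodal)
```

Even for the unimodal sample the statistic is positive; only its magnitude separates genuine mixtures from single normals, which is the calibration problem the posterior Bayes factor addresses.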

14.
Edgoose  T.  Allison  L. 《Statistics and Computing》1999,9(4):269-278
General purpose unsupervised classification programs have typically assumed independence between observations in the data they analyse. In this paper we report on an extension to the MML classifier Snob which enables the program to take advantage of some of the extra information implicit in ordered datasets (such as time-series). Specifically, the data is modelled as if it were generated from a first order Markov process with as many states as there are classes of observation. The state of such a process at any point in the sequence determines the class from which the corresponding observation is generated. Such a model is commonly referred to as a Hidden Markov Model. The MML calculation for the expected length of a near optimal two-part message stating a specific model of this type and a dataset given this model is presented. Such an estimate enables us to fairly compare models which differ in the number of classes they specify, which in turn can guide a robust unsupervised search of the model space. The new program, tSnob, is tested against both synthetic data and a large real world dataset and is found to make unbiased estimates of model parameters and to conduct an effective search of the extended model space.
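The hidden Markov structure described above rests on the standard forward recursion for the likelihood of an ordered dataset. The sketch below is not tSnob's MML message-length calculation; it only shows the forward algorithm, with a brute-force check, on a made-up two-state, two-symbol model.

```python
import itertools

# Invented two-state HMM: initial probabilities, transitions A, emissions B
PI = [0.6, 0.4]
A = [[0.7, 0.3],
     [0.2, 0.8]]
B = [[0.9, 0.1],   # state 0 mostly emits symbol 0
     [0.2, 0.8]]   # state 1 mostly emits symbol 1

def forward_prob(obs):
    # Forward algorithm: P(obs) summed over all hidden state paths, in O(T*S^2)
    alpha = [PI[s] * B[s][obs[0]] for s in range(2)]
    for o in obs[1:]:
        alpha = [sum(alpha[r] * A[r][s] for r in range(2)) * B[s][o]
                 for s in range(2)]
    return sum(alpha)

def brute_force_prob(obs):
    # Direct enumeration of every hidden path (exponential; for checking only)
    total = 0.0
    for path in itertools.product(range(2), repeat=len(obs)):
        p = PI[path[0]] * B[path[0]][obs[0]]
        for t in range(1, len(obs)):
            p *= A[path[t - 1]][path[t]] * B[path[t]][obs[t]]
        total += p
    return total

obs = [0, 0, 1, 1, 0]
p = forward_prob(obs)
```

In an MML setting, the negative log of this likelihood contributes the data part of the two-part message length being minimized.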

15.
A probabilistic expert system provides a graphical representation of a joint probability distribution which enables local computations of probabilities. Dawid (1992) provided a flow-propagation algorithm for finding the most probable configuration of the joint distribution in such a system. This paper analyses that algorithm in detail, and shows how it can be combined with a clever partitioning scheme to formulate an efficient method for finding the M most probable configurations. The algorithm is a divide-and-conquer technique that iteratively identifies the M most probable configurations.

16.
Summary: Asset prices in general and property prices in particular have gained increasing importance in recent years. Whereas in the late 1980s (after the stock market crash in autumn 1987) and in the last decade these prices mainly came under the heading of asset-price inflation/deflation, the focus has recently shifted to sustainable and viable financial systems. The notes primarily explain why the Bundesbank is involved in this area of price statistics, when this involvement began and what underlying data the Bundesbank uses. At the same time, they not only indicate the large degree of uncertainty in the reported data but also highlight the second-best nature of the calculations.
*Lecture given at the 9th conference "Messen der Teuerung" on 17/18 June 2004 in Marburg. The author presents his personal views, which do not necessarily coincide with those of the Deutsche Bundesbank.

17.
Software which allows interactive exploration of graphical displays is widely available. In addition there now exist sophisticated authoring tools which allow more general textual and graphical material to be presented in computer-based form. The role of an authoring tool in providing a graphical interface to a strategy for solving simple statistical problems in the context of teaching is discussed. This interface allows a variety of resources to be integrated. Specific examples, including the use of dynamic graphical displays in exploring data and in communicating the meaning of a model, are proposed. These ideas are illustrated by a problem involving the identification of the sex of a herring gull.

18.
Simple boundary correction for kernel density estimation
If a probability density function has bounded support, kernel density estimates often overspill the boundaries and are consequently especially biased at and near these edges. In this paper, we consider the alleviation of this boundary problem. A simple unified framework is provided which covers a number of straightforward methods and allows for their comparison: generalized jackknifing generates a variety of simple boundary kernel formulae. A well-known method of Rice (1984) is a special case. A popular linear correction method is another: it has close connections with the boundary properties of local linear fitting (Fan and Gijbels, 1992). Links with the optimal boundary kernels of Müller (1991) are investigated. Novel boundary kernels involving kernel derivatives and generalized reflection arise too. In comparisons, various generalized jackknifing methods perform rather similarly, so this, together with its existing popularity, makes linear correction as good a method as any. In an as yet unsuccessful attempt to improve on generalized jackknifing, a variety of alternative approaches is considered. A further contribution is to consider generalized jackknife boundary correction for density derivative estimation. En route to all this, a natural analogue of local polynomial regression for density estimation is defined and discussed.
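Among the straightforward methods such a framework covers, simple reflection is the easiest to state: each data point contributes a second, mirrored kernel across the boundary, so no probability mass spills below the support. A minimal sketch for support [0, ∞), with an invented bandwidth and simulated exponential data:

```python
import math
import random

def gauss_kernel(u):
    # Standard Gaussian kernel
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, data, h):
    # Plain kernel density estimate (spills mass across the boundary at 0)
    return sum(gauss_kernel((x - xi) / h) for xi in data) / (len(data) * h)

def kde_reflect(x, data, h):
    # Reflection boundary correction for support [0, inf):
    # each point xi also contributes a mirrored kernel centred at -xi
    return sum(gauss_kernel((x - xi) / h) + gauss_kernel((x + xi) / h)
               for xi in data) / (len(data) * h)

rng = random.Random(7)
data = [rng.expovariate(1.0) for _ in range(500)]  # true density is exp(-x)
h = 0.3
```

At the boundary x = 0 the mirrored kernel coincides with the original one, so the reflected estimate is exactly twice the uncorrected one there; the generalized-jackknife and linear-correction variants discussed in the paper refine this basic idea.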

19.
Summary: We describe depth-based graphical displays that show the interdependence of multivariate distributions. The plots involve one-dimensional curves or bivariate scatterplots, so they are easier to interpret than correlation matrices. The correlation curve, modelled on the scale curve of Liu et al. (1999), compares the volume of the observed central regions with the volume under independence. The correlation DD-plot is the scatterplot of depth values under a reference distribution against depth values under independence. The area of the plot gives a measure of distance from independence. Correlation curve and DD-plot require an independence model as a baseline: besides classical parametric specifications, a nonparametric estimator, derived from the randomization principle, is used. Combining data depth and the notion of quadrant dependence, quadrant correlation trajectories are obtained which allow simultaneous representation of subsets of variables. The properties of the plots for the multivariate normal distribution are investigated. Several real-data examples are presented. *This work was completed with the support of Ca' Foscari University.

20.
We introduce a simple combinatorial scheme for systematically running through a complete enumeration of sample reuse procedures such as the bootstrap, Hartigan's subsets, and various permutation tests. The scheme is based on Gray codes which give tours through various spaces, changing only one or two points at a time. We use updating algorithms to avoid recomputing statistics and achieve substantial speedups. Several practical examples and computer codes are given.
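The updating idea can be sketched for the simplest tour, enumerating all subsets: consecutive Gray codes differ in one bit, so the statistic (here a plain sum) is updated by adding or removing a single element rather than being recomputed from scratch. The function name and the toy statistic are ours, not the paper's.

```python
def gray_code_subset_sums(items):
    # Visit every subset in Gray-code order, toggling one element per step,
    # and maintain the running sum instead of recomputing it each time
    n = len(items)
    in_subset = [False] * n
    total = 0
    sums = [total]                       # empty subset comes first
    prev = 0
    for i in range(1, 2 ** n):
        code = i ^ (i >> 1)              # standard reflected Gray code
        bit = (code ^ prev).bit_length() - 1   # the single bit that changed
        prev = code
        if in_subset[bit]:
            total -= items[bit]
        else:
            total += items[bit]
        in_subset[bit] = not in_subset[bit]
        sums.append(total)
    return sums

sums = gray_code_subset_sums([1, 2, 4, 8])
```

With power-of-two items every subset has a distinct sum, so the 16 running sums are exactly 0 through 15; the same one-toggle update applies to more interesting statistics in bootstrap and subset enumeration.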
