Similar documents
20 similar documents found (search time: 46 ms)
1.
The difficulties of assessing details of the shape of a bivariate distribution, and of contrasting subgroups, from a raw scatterplot are discussed. The use of contours of a density estimate in highlighting features of distributional shape is illustrated on data on the development of aircraft technology. The estimated density height at each observation imposes an ordering on the data which can be used to select contours which contain specified proportions of the sample. This leads to a display which is reminiscent of a boxplot and which allows simple but effective comparison of different groups. Some simple properties of this technique are explored. Interesting features of a distribution such as arms and multimodality are found along the directions where the largest probability mass is located. These directions can be quantified through the modes of a density estimate based on the direction of each observation.
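The contour-selection idea above can be sketched in a few lines on simulated bivariate data (a Gaussian sample stands in for the aircraft data, which are not reproduced here): the density height at each observation induces an ordering, and the contour level enclosing a given proportion of the sample is simply a quantile of those heights.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical bivariate sample, shape (2, n) as gaussian_kde expects
data = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500).T

kde = gaussian_kde(data)
heights = kde(data)            # estimated density height at each observation

# Contour level containing roughly 75% of the sample: the 25th percentile
# of the observed density heights.
level_75 = np.quantile(heights, 0.25)
inside = heights >= level_75
```

Plotting the contour of `kde` at `level_75` then gives the boxplot-like display; overlaying such contours for two groups allows the comparison described in the abstract.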

2.
A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant and the noise model has a single noise parameter, and the ratio of these two constants alone is responsible for determining globally all these attributes of the interpolant: its complexity, flexibility, smoothness, characteristic scale length, and characteristic amplitude. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of conditional convexity when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error.

3.
Simple boundary correction for kernel density estimation
If a probability density function has bounded support, kernel density estimates often overspill the boundaries and are consequently especially biased at and near these edges. In this paper, we consider the alleviation of this boundary problem. A simple unified framework is provided which covers a number of straightforward methods and allows for their comparison: generalized jackknifing generates a variety of simple boundary kernel formulae. A well-known method of Rice (1984) is a special case. A popular linear correction method is another: it has close connections with the boundary properties of local linear fitting (Fan and Gijbels, 1992). Links with the optimal boundary kernels of Müller (1991) are investigated. Novel boundary kernels involving kernel derivatives and generalized reflection arise too. In comparisons, various generalized jackknifing methods perform rather similarly, so this, together with its existing popularity, makes linear correction as good a method as any. In an as yet unsuccessful attempt to improve on generalized jackknifing, a variety of alternative approaches are considered. A further contribution is to consider generalized jackknife boundary correction for density derivative estimation. En route to all this, a natural analogue of local polynomial regression for density estimation is defined and discussed.
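The simplest member of the family above is boundary correction by reflection; a minimal sketch with a Gaussian kernel and a boundary at zero (not the paper's generalized-jackknife formulae) illustrates how it removes the "overspill" bias at the edge:

```python
import numpy as np

def kde_reflect(x, data, h, lower=0.0):
    """Gaussian KDE on [lower, inf) with boundary reflection:
    each observation contributes its kernel plus a mirror image
    reflected about the boundary, so no mass leaks below `lower`."""
    x = np.asarray(x, float)[:, None]
    d = np.asarray(data, float)[None, :]
    k = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    f = k((x - d) / h) + k((x - (2 * lower - d)) / h)  # kernel + reflection
    return f.mean(axis=1) / h

rng = np.random.default_rng(1)
sample = rng.exponential(size=2000)      # true density equals 1.0 at x = 0
grid = np.array([0.0, 0.5, 1.0])
est = kde_reflect(grid, sample, h=0.2)
```

An unreflected estimate at the boundary would be close to half the true density; the reflected version is far less biased there, though (as the paper notes) reflection is only one of several corrections and not always the best.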

4.
I present a new Markov chain sampling method appropriate for distributions with isolated modes. Like the recently developed method of simulated tempering, the tempered transition method uses a series of distributions that interpolate between the distribution of interest and a distribution for which sampling is easier. The new method has the advantage that it does not require approximate values for the normalizing constants of these distributions, which are needed for simulated tempering, and can be tedious to estimate. Simulated tempering performs a random walk along the series of distributions used. In contrast, the tempered transitions of the new method move systematically from the desired distribution, to the easily-sampled distribution, and back to the desired distribution. This systematic movement avoids the inefficiency of a random walk, an advantage that is unfortunately cancelled by an increase in the number of interpolating distributions required. Because of this, the sampling efficiency of the tempered transition method in simple problems is similar to that of simulated tempering. On more complex distributions, however, simulated tempering and tempered transitions may perform differently. Which is better depends on the ways in which the interpolating distributions are deceptive.
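A sketch of a tempered transition for a two-mode univariate target may help: the chain heats through a ladder of powered distributions p(x)^β, cools back down, and accepts or rejects the whole excursion in one Metropolis decision. The ladder, proposal scales, and target below are illustrative choices, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def logp(x):
    """Bimodal target: equal mixture of N(-4, 0.5^2) and N(4, 0.5^2)."""
    return np.logaddexp(-0.5 * ((x + 4) / 0.5) ** 2,
                        -0.5 * ((x - 4) / 0.5) ** 2)

def met_steps(x, beta, n_steps=10):
    """Random-walk Metropolis steps leaving p(x)^beta invariant."""
    for _ in range(n_steps):
        prop = x + rng.normal(scale=1.0 / np.sqrt(beta))
        if np.log(rng.uniform()) < beta * (logp(prop) - logp(x)):
            x = prop
    return x

betas = [1.0, 0.5, 0.25, 0.12, 0.05]   # betas[0] is the target

def tempered_transition(x):
    logr, y = 0.0, x
    for i in range(1, len(betas)):             # heating: factor before each step
        logr += (betas[i] - betas[i - 1]) * logp(y)
        y = met_steps(y, betas[i])
    for i in range(len(betas) - 1, 0, -1):     # cooling: factor after each step
        y = met_steps(y, betas[i])
        logr += (betas[i - 1] - betas[i]) * logp(y)
    # Accept or reject the entire up-and-down trajectory.
    return y if np.log(rng.uniform()) < logr else x

x, samples = -4.0, []
for _ in range(1000):
    x = tempered_transition(x)
    x = met_steps(x, 1.0)    # ordinary moves within the current mode
    samples.append(x)
samples = np.asarray(samples)
```

Ordinary Metropolis at β = 1 would essentially never cross the barrier between the modes; the tempered transitions do, and no normalizing constants for the intermediate distributions are needed.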

5.
Let X, T, Y be random vectors such that the distribution of Y conditional on covariates partitioned into the vectors X = x and T = t is given by a density f(y; x, ·) indexed jointly by a finite-dimensional parameter vector and a smooth, real-valued function of t. The joint distribution of X and T is assumed to be independent of both components. This semiparametric model is called conditionally parametric because the conditional distribution of Y given X = x, T = t is parameterized by the finite-dimensional parameter together with the value of the smooth function at t. Severini and Wong (1992, Annals of Statistics 20: 1768–1802) show how to estimate the parametric and nonparametric components using generalized profile likelihoods, and they also provide a review of the literature on generalized profile likelihoods. Under specified regularity conditions, they derive an asymptotically efficient estimator of the parametric component and a uniformly consistent estimator of the nonparametric component. The purpose of this paper is to provide a short tutorial for this method of estimation under a likelihood-based model, reviewing results from Stein (1956, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, University of California Press, Berkeley, pp. 187–196), Severini (1987, Ph.D. thesis, The University of Chicago, Department of Statistics, Chicago, Illinois), and Severini and Wong (op. cit.).

6.
Chu, Hui-May and Kuo, Lynn. Statistics and Computing, 1997, 7(3): 183–192.
Bayesian methods for estimating the dose response curves with the one-hit model, the gamma multi-hit model, and their modified versions with Abbott's correction are studied. The Gibbs sampling approach with data augmentation and with the Metropolis algorithm is employed to compute the Bayes estimates of the potency curves. In addition, estimation of the relative additional risk and the virtually safe dose is studied. Model selection based on conditional predictive ordinates from cross-validated data is developed.

7.
Kernel density classification and boosting: an L2 analysis
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is boosting, and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
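As a toy illustration of the baseline method being analysed (kernel density classification, not the paper's boosting algorithm), one classifies a point to the group whose estimated density is larger; scipy's default bandwidths stand in for the optimal smoothing parameters discussed in the abstract.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
a = rng.normal(-1.0, 1.0, size=300)   # training sample, group A
b = rng.normal(+1.0, 1.0, size=300)   # training sample, group B

kde_a, kde_b = gaussian_kde(a), gaussian_kde(b)

def classify(x):
    # Assign each point to the group with the larger estimated density
    # (equal prior probabilities assumed).
    return np.where(kde_a(x) > kde_b(x), "A", "B")

labels = classify(np.array([-2.0, 2.0]))
```

Boosting, in the paper's proposal, would reweight the observations over several rounds so that the bias of this base classifier is reduced near the decision boundary.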

8.
We propose exploratory, easily implemented methods for diagnosing the appropriateness of an underlying copula model for bivariate failure time data, allowing censoring in either or both failure times. It is found that the proposed approach effectively distinguishes gamma from positive stable copula models when the sample is moderately large or the association is strong. Data from the Women's Health and Aging Study (WHAS, Guralnik et al., The Women's Health and Aging Study: Health and Social Characteristics of Older Women with Disability. National Institute on Aging: Bethesda, Maryland, 1995) are analyzed to demonstrate the proposed diagnostic methodology. The positive stable model gives a better overall fit to these data than the gamma frailty model, but it tends to underestimate association at the later time points. The finding is consistent with recent theory differentiating catastrophic from progressive disability onset in older adults. The proposed methods supply an interpretable quantity for copula diagnosis. We hope that they will usefully inform practitioners as to the reasonableness of their modeling choices.

9.
A probabilistic expert system provides a graphical representation of a joint probability distribution which can be used to simplify and localize calculations. Jensen et al. (1990) introduced a flow-propagation algorithm for calculating marginal and conditional distributions in such a system. This paper analyses that algorithm in detail, and shows how it can be modified to perform other tasks, including maximization of the joint density and simultaneous fast retraction of evidence entered on several variables.

10.
Local linear curve estimators are typically constructed using a compactly supported kernel, which minimizes edge effects and (in the case of the Epanechnikov kernel) optimizes asymptotic performance in a mean square sense. The use of compactly supported kernels can produce numerical problems, however. A common remedy is ridging, which may be viewed as shrinkage of the local linear estimator towards the origin. In this paper we propose a general form of shrinkage, and suggest that, in practice, shrinkage be towards a proper curve estimator. For the latter we propose a local linear estimator based on an infinitely supported kernel. This approach is resistant against selection of too large a shrinkage parameter, which can impair performance when shrinkage is towards the origin. It also removes problems of numerical instability resulting from using a compactly supported kernel, and enjoys very good mean squared error properties.
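A hedged sketch of the ingredients above: a local linear estimator with an infinitely supported (Gaussian) kernel, plus a small ridge term guarding the local design matrix. The shrinkage target and parameter choices of the paper are not reproduced; the values below are illustrative.

```python
import numpy as np

def local_linear(x0, x, y, h, ridge=1e-8):
    """Local linear regression at x0 with a Gaussian kernel.
    The ridge term keeps the 2x2 local system nonsingular even
    when few points carry appreciable weight near x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    A = X.T @ (w[:, None] * X) + ridge * np.eye(2)
    beta = np.linalg.solve(A, X.T @ (w * y))
    return beta[0]    # local intercept = fitted value at x0

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=200)
fit = local_linear(0.25, x, y, h=0.05)   # true curve value here is 1.0
```

With a compactly supported kernel the matrix `A` can become exactly singular in sparse regions; the Gaussian kernel avoids that entirely, which is the numerical point the abstract makes.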

11.
It is well-known that multivariate curve estimation suffers from the curse of dimensionality. However, reasonable estimators are possible, even in several dimensions, under appropriate restrictions on the complexity of the curve. In the present paper we explore how much appropriate wavelet estimators can exploit a typical restriction on the curve such as additivity. We first propose an adaptive and simultaneous estimation procedure for all additive components in additive regression models and discuss rate of convergence results and data-dependent truncation rules for wavelet series estimators. To speed up computation we then introduce a wavelet version of the functional ANOVA algorithm for additive regression models and propose a regularization algorithm which guarantees an adaptive solution to the multivariate estimation problem. Some simulations indicate that wavelet methods complement nicely the existing methodology for nonparametric multivariate curve estimation.

12.
The Generalized Hyperbolic distribution (Barndorff-Nielsen 1977) is a variance-mean mixture of a normal distribution with the Generalized Inverse Gaussian distribution. Recently subclasses of these distributions (e.g., the hyperbolic distribution and the Normal Inverse Gaussian distribution) have been applied to construct stochastic processes in turbulence and particularly in finance, where multidimensional problems are of special interest. Parameter estimation for these distributions based on an i.i.d. sample is a difficult task even for a specified one-dimensional subclass (a subclass being uniquely defined by the index parameter λ) and relies on numerical methods. For the hyperbolic subclass (λ = 1), the computer program hyp (Blæsild and Sørensen 1992) estimates parameters via ML when the dimensionality is less than or equal to three. To the best of the author's knowledge, no successful attempts have been made to fit any given subclass when the dimensionality is greater than three. This article proposes a simple EM-based (Dempster, Laird and Rubin 1977) ML estimation procedure to estimate parameters of the distribution when the subclass is known, regardless of the dimensionality. Our method relies on the ability to numerically evaluate modified Bessel functions of the third kind and their logarithms, which is made possible by currently available software. The method is applied to fit the five-dimensional Normal Inverse Gaussian distribution to a series of returns on foreign exchange rates.
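The variance-mean mixture structure that such an EM algorithm exploits (the mixing variable is the natural "missing data") can be illustrated by simulating the Normal Inverse Gaussian case. The (alpha, beta, mu, delta) parameterization and the values below are assumptions made for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# NIG as a normal variance-mean mixture (parameter names assumed):
#   W ~ InverseGaussian with mean delta/gamma and shape delta^2,
#   X | W = w ~ N(mu + beta * w, w),  where gamma = sqrt(alpha^2 - beta^2).
alpha, beta, mu, delta = 2.0, 0.5, 0.0, 1.0
gamma = np.sqrt(alpha**2 - beta**2)

n = 100_000
w = rng.wald(delta / gamma, delta**2, size=n)   # inverse Gaussian mixing variable
x = mu + beta * w + np.sqrt(w) * rng.normal(size=n)

# Known mixture moments: E[X] = mu + beta*delta/gamma,
#                        Var[X] = delta/gamma + beta^2 * delta/gamma^3.
```

In an EM fit, the E-step computes conditional expectations of W (and 1/W, log W) given each observation, which is where the modified Bessel functions mentioned in the abstract enter; the M-step then updates the parameters in closed form.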

13.
Each cell of a two-dimensional lattice is painted one of κ colors, arranged in a color wheel. The colors advance (k to k + 1 mod κ) either automatically or by contact with at least a threshold number of successor colors in a prescribed local neighborhood. Discrete-time parallel systems of this sort in which color 0 updates by contact and the rest update automatically are called Greenberg-Hastings (GH) rules. A system in which all colors update by contact is called a cyclic cellular automaton (CCA). Started from appropriate initial conditions, these models generate periodic traveling waves. Started from random configurations, the same rules exhibit complex self-organization, typically characterized by nucleation of locally periodic ram's horns or spirals. Corresponding random processes give rise to a variety of forest fire equilibria that display large-scale stochastic wave fronts. This paper describes a framework, theoretically based, but relying on extensive interactive computer graphics experimentation, for investigation of the complex dynamics shared by excitable media in a broad spectrum of scientific contexts. By focusing on simple mathematical prototypes we hope to obtain a better understanding of the basic organizational principles underlying spatially distributed oscillating systems.
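The CCA update rule is simple enough to sketch directly. The number of colors, the threshold, the von Neumann neighborhood, and the periodic boundaries below are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(0)
KAPPA, THETA = 8, 2          # number of colors and contact threshold (assumed)
grid = rng.integers(0, KAPPA, size=(100, 100))   # random initial configuration

def cca_step(g):
    """One parallel CCA update: a cell advances from k to (k+1) mod KAPPA
    when at least THETA of its four nearest neighbours (periodic boundaries)
    already hold the successor color."""
    succ = (g + 1) % KAPPA
    count = np.zeros_like(g)
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        count += (np.roll(g, shift, axis=axis) == succ)
    return np.where(count >= THETA, succ, g)

for _ in range(50):
    grid = cca_step(grid)
```

Rendering `grid` after successive steps (e.g. with matplotlib's `imshow`) shows the nucleation of locally periodic spiral structure described in the abstract; a GH rule differs only in that colors other than 0 advance automatically.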

14.
Summary: Data depth is a concept that measures the centrality of a point in a given data cloud x_1, x_2, ..., x_n or in a multivariate distribution P^X on R^d. Every depth defines a family of so-called trimmed regions. The α-trimmed region is given by the set of points that have a depth of at least α. Data depth has been used to define multivariate measures of location and dispersion as well as multivariate dispersion orders. If the depth of a point can be represented as the minimum of the depths with respect to all unidimensional projections, we say that the depth satisfies the (weak) projection property. Many depths which have been proposed in the literature can be shown to satisfy the weak projection property. A depth is said to satisfy the strong projection property if for every α the unidimensional projection of the α-trimmed region equals the α-trimmed region of the projected distribution. After a short introduction into the general concept of data depth we formally define the weak and the strong projection property and give necessary and sufficient criteria for the projection property to hold. We further show that the projection property facilitates the construction of depths from univariate trimmed regions. We discuss some of the depths proposed in the literature which possess the projection property and define a general class of projection depths, which are constructed from univariate trimmed regions by using the above method. Finally, algorithmic aspects of projection depths are discussed. We describe an algorithm which enables the approximate computation of depths that satisfy the projection property.
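The weak projection property suggests an approximation scheme of the kind the abstract alludes to: minimize a univariate depth over many random projection directions. A sketch for halfspace (Tukey) depth, which satisfies the property, under illustrative settings:

```python
import numpy as np

def approx_halfspace_depth(point, data, n_dir=500, rng=None):
    """Approximate halfspace (Tukey) depth as the minimum, over random
    unit directions, of the univariate halfspace depth of the projected
    point within the projected sample."""
    if rng is None:
        rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_dir, data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_data = data @ dirs.T             # shape (n, n_dir)
    proj_pt = point @ dirs.T              # shape (n_dir,)
    below = (proj_data <= proj_pt).mean(axis=0)
    above = (proj_data >= proj_pt).mean(axis=0)
    return np.minimum(below, above).min()

rng = np.random.default_rng(1)
data = rng.normal(size=(400, 2))
center_depth = approx_halfspace_depth(np.zeros(2), data, rng=rng)
edge_depth = approx_halfspace_depth(np.array([4.0, 4.0]), data, rng=rng)
```

Because the minimum is taken over a finite sample of directions, the result is an upper bound on the exact depth; more directions tighten the approximation, which is the trade-off any such algorithm must manage.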

15.
Models are considered in which true lifetimes are generated by a Weibull regression model and measured lifetimes are determined from the true times by certain measurement error models. Adjusted estimators are obtained under one parametric specification. The bias properties of these estimators and standard estimators are compared both theoretically, using small measurement error asymptotics, and by simulation. The standard estimators of regression coefficients, other than the intercept, are bias-robust. The adjusted estimator of the shape parameter removes the bias of the standard estimator.

16.
A new area of research interest is the computation of exact confidence limits or intervals for a scalar parameter of interest from discrete data by inverting a hypothesis test based on a studentized test statistic. See, for example, Chan and Zhang (1999), Agresti and Min (2001) and Agresti (2003), who deal with a difference of binomial probabilities, and Agresti and Min (2002), who deal with an odds ratio. However, neither (1) a detailed analysis of the computational issues involved nor (2) a reliable method of computation that deals effectively with these issues is currently available. In this paper we solve these two problems for a very broad class of discrete data models. We suppose that the distribution of the data is determined by the parameter of interest together with a nuisance parameter vector. We also consider six different studentized test statistics. Our contributions to (1) are as follows. We show that the P-value resulting from the hypothesis test, considered as a function of the null-hypothesized value of the parameter of interest, has both jump and drop discontinuities. Numerical examples are used to demonstrate that these discontinuities lead to the failure of simple-minded approaches to the computation of the confidence limit or interval. We also provide a new method for efficiently computing the set of all possible locations of these discontinuities. Our contribution to (2) is to provide a new and reliable method of computing the confidence limit or interval, based on the knowledge of this set.
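The test-inversion idea is easiest to see in the simplest case with no nuisance parameter, a single binomial proportion, sketched here as a grid search over null-hypothesized values. Note that the P-value is a step function of the hypothesized value; the discontinuities that make naive root-finding fail are exactly the issue the paper analyses in the harder nuisance-parameter setting.

```python
import numpy as np
from scipy.stats import binom

def exact_lower_limit(x, n, alpha=0.05, grid=2000):
    """Exact lower confidence limit for a binomial proportion by inverting
    a one-sided test: the smallest p0 on the grid whose P-value
    P(X >= x | p0) exceeds alpha.  (Single-parameter illustration only;
    the paper's setting adds nuisance parameters and studentized statistics.)"""
    p_grid = np.linspace(0.0, 1.0, grid)
    pvals = binom.sf(x - 1, n, p_grid)   # P(X >= x | p0), nondecreasing in p0
    ok = pvals > alpha
    return p_grid[ok][0] if ok.any() else 1.0

low = exact_lower_limit(8, 20)   # lower 95% limit for 8 successes in 20 trials
```

Here monotonicity of the P-value in p0 makes the grid search reliable; with a studentized statistic and nuisance parameters the P-value is no longer monotone, which is why the paper's method for locating all discontinuities is needed.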

17.
Comparison of observed mortality with known, background, or standard rates has taken place for several hundred years. With the developments of regression models for survival data, an increasing interest has arisen in individualizing the standardisation using covariates of each individual. Also, account sometimes needs to be taken of random variation in the standard group. Emphasizing uses of the Cox regression model, this paper surveys a number of critical choices and pitfalls in this area. The methods are illustrated by comparing survival of liver patients after transplantation with survival after conservative treatment.

18.
We present a new test for the presence of a normal mixture distribution, based on the posterior Bayes factor of Aitkin (1991). The new test has slightly lower power than the likelihood ratio test. It does not require the computation of the MLEs of the parameters or a search for multiple maxima, but requires computations based on classification likelihood assignments of observations to mixture components.

19.
In the exponential regression model, Bayesian inference concerning the non-linear regression parameter has proved extremely difficult. In particular, standard improper diffuse priors for the usual parameters lead to an improper posterior for the non-linear regression parameter. In a recent paper, Ye and Berger (1991) applied the reference prior approach of Bernardo (1979) and Berger and Bernardo (1989), yielding a proper informative prior for this parameter. This prior depends on the values of the explanatory variable, goes to 0 as the non-linear parameter goes to 1, and depends on the specification of a hierarchical ordering of importance of the parameters. This paper explains the failure of the uniform prior to give a proper posterior: the reason is the appearance of the determinant of the information matrix in the posterior density for the non-linear parameter. We apply the posterior Bayes factor approach of Aitkin (1991) to this problem; in this approach we integrate out nuisance parameters with respect to their conditional posterior density given the parameter of interest. The resulting integrated likelihood for the non-linear parameter requires only the standard diffuse prior for all the parameters, and is unaffected by orderings of importance of the parameters. Computation of this likelihood is extremely simple. The approach is applied to the three examples discussed by Berger and Ye and the likelihoods compared with their posterior densities.

20.
Jerome H. Friedman and Nicholas I. Fisher
Many data analytic questions can be formulated as (noisy) optimization problems. They explicitly or implicitly involve finding simultaneous combinations of values for a set of (input) variables that imply unusually large (or small) values of another designated (output) variable. Specifically, one seeks a set of subregions of the input variable space within which the value of the output variable is considerably larger (or smaller) than its average value over the entire input domain. In addition it is usually desired that these regions be describable in an interpretable form involving simple statements (rules) concerning the input values. This paper presents a procedure directed towards this goal based on the notion of patient rule induction. This patient strategy is contrasted with the greedy ones used by most rule induction methods, and semi-greedy ones used by some partitioning tree techniques such as CART. Applications involving scientific and commercial databases are presented.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号