Similar Documents
A total of 20 similar documents were found (search time: 31 ms).
1.
In some situations the asymptotic distribution of a random function T_n(θ) that depends on a nuisance parameter θ is tractable when θ has a known value. In that case it can be used as a test statistic, if suitably constructed, for some hypothesis. However, in practice, θ often needs to be replaced by an estimator S_n. In this paper general results are given concerning the asymptotic distribution of T_n(S_n) that include special cases previously dealt with. In particular, some situations are covered where the usual likelihood theory is nonregular and extreme values are employed to construct estimators and test statistics.

2.
A new area of research interest is the computation of exact confidence limits or intervals for a scalar parameter of interest θ from discrete data by inverting a hypothesis test based on a studentized test statistic. See, for example, Chan and Zhang (1999), Agresti and Min (2001) and Agresti (2003), who deal with a difference of binomial probabilities, and Agresti and Min (2002), who deal with an odds ratio. However, neither (1) a detailed analysis of the computational issues involved nor (2) a reliable method of computation that deals effectively with these issues is currently available. In this paper we solve these two problems for a very broad class of discrete data models. We suppose that the distribution of the data is determined by (θ, ψ), where ψ is a nuisance parameter vector. We also consider six different studentized test statistics. Our contributions to (1) are as follows. We show that the P-value resulting from the hypothesis test, considered as a function of the null-hypothesized value of θ, has both jump and drop discontinuities. Numerical examples are used to demonstrate that these discontinuities lead to the failure of simple-minded approaches to the computation of the confidence limit or interval. We also provide a new method for efficiently computing the set of all possible locations of these discontinuities. Our contribution to (2) is to provide a new and reliable method of computing the confidence limit or interval, based on the knowledge of this set.
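To make the discontinuity phenomenon concrete, here is a minimal sketch (not the paper's algorithm) that computes an exact one-sided P-value for a difference of two binomial probabilities by enumerating all outcomes and maximizing over a grid of the nuisance parameter. The Wald-type statistic, sample sizes and grid resolution are illustrative assumptions; plotting the printed values against delta0 exhibits the jump/drop behaviour described above.

from math import comb, sqrt

def t_stat(y1, n1, y2, n2, delta0):
    p1, p2 = y1 / n1, y2 / n2
    se = sqrt(max(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2, 1e-12))
    return (p2 - p1 - delta0) / se

def exact_pvalue(x1, n1, x2, n2, delta0, grid=400):
    tobs = t_stat(x1, n1, x2, n2, delta0)
    # rejection region: outcomes at least as extreme as the one observed
    region = [(y1, y2) for y1 in range(n1 + 1) for y2 in range(n2 + 1)
              if t_stat(y1, n1, y2, n2, delta0) >= tobs]
    lo, hi = max(0.0, -delta0), min(1.0, 1.0 - delta0)
    pmax = 0.0
    for g in range(grid + 1):            # maximize over the nuisance parameter p1
        p1 = lo + (hi - lo) * g / grid
        p2 = p1 + delta0
        prob = sum(comb(n1, a) * p1**a * (1 - p1)**(n1 - a) *
                   comb(n2, b) * p2**b * (1 - p2)**(n2 - b)
                   for a, b in region)
        pmax = max(pmax, prob)
    return pmax

for d in (0.10, 0.12, 0.14, 0.16, 0.18, 0.20):
    print(d, round(exact_pvalue(3, 10, 7, 10, d), 4))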

3.
Computing location depth and regression depth in higher dimensions
The location depth (Tukey 1975) of a point θ relative to a p-dimensional data set Z of size n is defined as the smallest number of data points in a closed halfspace with boundary through θ. For bivariate data, it can be computed in O(n log n) time (Rousseeuw and Ruts 1996). In this paper we construct an exact algorithm to compute the location depth in three dimensions in O(n² log n) time. We also give an approximate algorithm to compute the location depth in p dimensions in O(mp³ + mpn) time, where m is the number of p-subsets used. Recently, Rousseeuw and Hubert (1996) defined the depth of a regression fit. The depth of a hyperplane with coefficients (θ_1, ..., θ_p) is the smallest number of residuals that need to change sign to make (θ_1, ..., θ_p) a nonfit. For bivariate data (p = 2) this depth can be computed in O(n log n) time as well. We construct an algorithm to compute the regression depth of a plane relative to a three-dimensional data set in O(n² log n) time, and another that deals with p = 4 in O(n³ log n) time. For data sets with large n and/or p we propose an approximate algorithm that computes the depth of a regression fit in O(mp³ + mpn + mn log n) time. For all of these algorithms, actual implementations are made available.
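For readers new to location depth, a naive O(n²) sketch of the bivariate case can help fix the definition; it is far slower than the algorithms the paper develops, and the tolerance and example data are arbitrary choices.

import math

def location_depth2d(theta, Z, eps=1e-12):
    # Depth of theta: minimum, over closed halfspaces with boundary through
    # theta, of the number of data points in the halfspace. Candidate normals
    # can be restricted to directions perpendicular to theta -> z_i.
    tx, ty = theta
    depth = len(Z)
    for zx, zy in Z:
        dx, dy = zx - tx, zy - ty
        if abs(dx) < eps and abs(dy) < eps:
            continue
        for ux, uy in ((-dy, dx), (dy, -dx)):   # the two perpendicular normals
            count = sum(1 for px, py in Z
                        if (px - tx) * ux + (py - ty) * uy >= -eps)
            depth = min(depth, count)
    return depth

Z = [(0, 0), (2, 0), (1, 2), (1, 1), (3, 3)]
print(location_depth2d((1, 1), Z))   # depth of a fairly central point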

4.
When constructing uniform random numbers in [0, 1] from the output of a physical device, usually n independent and unbiased bits B_j are extracted and combined into the machine number Y = Σ_{j=1}^n B_j 2^{−j}. In order to reduce the number of data used to build one real number, we observe that for independent and exponentially distributed random variables X_n (which arise for example as waiting times between two consecutive impulses of a Geiger counter) the variable U_n := X_{2n−1}/(X_{2n−1} + X_{2n}) is uniform in [0, 1]. In the practical application X_n can only be measured up to a given precision δ (in terms of the expectation of the X_n); it is shown that the distribution function obtained by calculating U_n from these measurements differs from the uniform by less than δ/2. We compare this deviation with the error resulting from the use of biased bits B_j with P{B_j = 1} = 1/2 + ε (where ε ∈ ]−1/2, 1/2[) in the construction of Y above. The influence of a bias is given by the estimate that in the p-total variation norm ‖Q‖_{TV_p} = (Σ |Q(ω)|^p)^{1/p} (p ≥ 1) we have ‖P_ε^Y − P_0^Y‖_{TV_p} ≤ (c_n · ε)^{1/p} with c_n → c_p for n → ∞. For the distribution functions, ‖F_ε^Y − F_0^Y‖ ≤ 2(1 − 2^{−n})|ε| holds.
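A quick simulation can confirm the stated construction: for i.i.d. exponential waiting times, U_n = X_{2n−1}/(X_{2n−1} + X_{2n}) is uniform on [0, 1]. The rate and sample size below are arbitrary demonstration choices.

import random

random.seed(1)
rate = 3.0
u = []
for _ in range(100000):
    x1 = random.expovariate(rate)       # consecutive exponential waiting times
    x2 = random.expovariate(rate)
    u.append(x1 / (x1 + x2))

# the empirical CDF at a few points should be close to the identity
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(q, sum(1 for v in u if v <= q) / len(u))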

5.
A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone is responsible for determining globally all these attributes of the interpolant: its complexity, flexibility, smoothness, characteristic scale length, and characteristic amplitude. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of conditional convexity when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error.
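As background, here is a tiny sketch of the single-constant baseline that the abstract generalizes: a discretized interpolant with a Gaussian noise parameter β and one smoothness penalty constant α, where only the ratio λ = α/β affects the posterior mode. The grid, nearest-point design matrix and test function are illustrative assumptions, not the paper's models.

import numpy as np

def fit(x_obs, y_obs, grid, lam):
    # posterior mode of f solves (A^T A + lam * D^T D) f = A^T y, lam = alpha/beta
    n = len(grid)
    A = np.zeros((len(x_obs), n))
    for i, x in enumerate(x_obs):               # nearest-grid-point "design"
        A[i, np.argmin(np.abs(grid - x))] = 1.0
    D = np.diff(np.eye(n), 2, axis=0)           # second-difference roughness penalty
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y_obs)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
x_obs = rng.uniform(0, 1, 30)
y_obs = np.sin(6 * x_obs) + 0.1 * rng.standard_normal(30)
for lam in (0.01, 1.0, 100.0):                  # one global ratio controls everything
    f = fit(x_obs, y_obs, grid, lam)
    print(lam, float(np.abs(np.diff(f, 2)).mean()))  # roughness falls as lam grows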

6.
The generalized odds-rate class of regression models for time to event data is indexed by a non-negative constant ρ and assumes that g_ρ(S(t|Z)) = α(t) + β′Z, where g_ρ(s) = log(ρ⁻¹(s⁻ρ − 1)) for ρ > 0, g_0(s) = log(−log s), S(t|Z) is the survival function of the time to event for an individual with q×1 covariate vector Z, β is a q×1 vector of unknown regression parameters, and α(t) is some arbitrary increasing function of t. When ρ = 0, this model is equivalent to the proportional hazards model, and when ρ = 1, this model reduces to the proportional odds model. In the presence of right censoring, we construct estimators for β and exp(α(t)) and show that they are consistent and asymptotically normal. In addition, we show that the estimator for β is semiparametric efficient in the sense that it attains the semiparametric variance bound.
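The link function is easy to check numerically. The following sketch (illustrative values only) shows that g_ρ(s) = log(ρ⁻¹(s⁻ρ − 1)) tends to the proportional-hazards link log(−log s) as ρ → 0 and equals the log-odds of 1 − s at ρ = 1.

import math

def g(rho, s):
    # generalized odds-rate link; rho = 0 is the limiting log(-log) link
    if rho == 0.0:
        return math.log(-math.log(s))
    return math.log((s ** (-rho) - 1.0) / rho)

s = 0.7
for rho in (1.0, 0.1, 0.01, 0.001, 0.0):
    print(rho, g(rho, s))            # converges to g(0, s) as rho -> 0
print(math.log((1 - s) / s), g(1.0, s))   # rho = 1: the proportional-odds link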

7.
Summary: We describe depth-based graphical displays that show the interdependence of multivariate distributions. The plots involve one-dimensional curves or bivariate scatterplots, so they are easier to interpret than correlation matrices. The correlation curve, modelled on the scale curve of Liu et al. (1999), compares the volume of the observed central regions with the volume under independence. The correlation DD-plot is the scatterplot of depth values under a reference distribution against depth values under independence. The area of the plot gives a measure of distance from independence. The correlation curve and DD-plot require an independence model as a baseline: besides classical parametric specifications, a nonparametric estimator, derived from the randomization principle, is used. Combining data depth and the notion of quadrant dependence, quadrant correlation trajectories are obtained which allow simultaneous representation of subsets of variables. The properties of the plots for the multivariate normal distribution are investigated. Several real data examples are presented. *This work was completed with the support of Ca' Foscari University.
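The DD-plot idea can be sketched compactly. The toy example below uses Mahalanobis depth as a convenient stand-in for the depth notion, a fitted bivariate normal as the reference distribution and its diagonalized covariance as the independence baseline; all of these choices are illustrative simplifications of the paper's constructions.

import numpy as np

def mahalanobis_depth(X, mean, cov):
    d = X - mean
    m = np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)  # squared distances
    return 1.0 / (1.0 + m)

rng = np.random.default_rng(0)
rho = 0.8
X = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=500)
mean, cov = X.mean(axis=0), np.cov(X.T)
cov_indep = np.diag(np.diag(cov))            # independence baseline
d_ref = mahalanobis_depth(X, mean, cov)      # depth under the reference model
d_ind = mahalanobis_depth(X, mean, cov_indep)  # depth under independence
# under independence the two depths would agree; spread away from the
# 45-degree line of the (d_ind, d_ref) scatter indicates dependence
print(round(float(np.mean(np.abs(d_ref - d_ind))), 4))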

8.
Multi-layer perceptrons (MLPs), a common type of artificial neural networks (ANNs), are widely used in computer science and engineering for object recognition, discrimination and classification, and have more recently found use in process monitoring and control. Training such networks is not a straightforward optimisation problem, and we examine features of these networks which contribute to the optimisation difficulty. Although the original perceptron, developed in the late 1950s (Rosenblatt 1958, Widrow and Hoff 1960), had a binary output from each node, this was not compatible with back-propagation and similar training methods for the MLP. Hence the output of each node (and the final network output) was made a differentiable function of the network inputs. We reformulate the MLP model with the original perceptron in mind so that each node in the hidden layers can be considered as a latent (that is, unobserved) Bernoulli random variable. This maintains the property of binary output from the nodes, and with an imposed logistic regression of the hidden layer nodes on the inputs, the expected output of our model is identical to the MLP output with a logistic sigmoid activation function (for the case of one hidden layer). We examine the usual MLP objective function (the sum of squares) and show its multi-modal form and the corresponding optimisation difficulty. We also construct the likelihood for the reformulated latent variable model and maximise it by standard finite mixture ML methods using an EM algorithm, which provides stable ML estimates from random starting positions without the need for regularisation or cross-validation. Over-fitting of the number of nodes does not affect this stability. This algorithm is closely related to the EM algorithm of Jordan and Jacobs (1994) for the Mixture of Experts model. We conclude with some general comments on the relation between the MLP and latent variable models.
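The stated equivalence for one hidden layer is simple to verify by simulation, assuming (as in this sketch) an output that is linear in the hidden bits: the expected output over the latent Bernoulli states matches the deterministic forward pass with logistic activations. Weights and sizes below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))     # input -> hidden weights (3 inputs, 4 nodes)
v = rng.standard_normal(4)          # hidden -> output weights (linear output)
x = rng.standard_normal(3)

p = 1.0 / (1.0 + np.exp(-(x @ W)))  # logistic regression of nodes on inputs
mlp_out = p @ v                     # standard MLP with sigmoid hidden activations

draws = (rng.random((200000, 4)) < p).astype(float)  # latent Bernoulli states
latent_out = float((draws @ v).mean())               # Monte Carlo expectation

print(mlp_out, latent_out)          # agree up to simulation error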

9.
When simulating a dynamical system, the computation is actually of a spatially discretized system, because finite machine arithmetic replaces continuum state space. For chaotic dynamical systems, the discretized simulations often have collapsing effects, to a fixed point or to short cycles. Statistical properties of these phenomena can be modelled with random mappings with an absorbing centre. The model gives results which are very much in line with computational experiments. The effects are discussed with special reference to the family of mappings f_ℓ(x) = 1 − |1 − 2x|^ℓ, x ∈ [0, 1], 1 < ℓ < 2. Computer experiments show close agreement with predictions of the model.
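A short simulation along these lines shows the collapsing effect: iterating the map on a grid of N machine states, most orbits are absorbed into a handful of very short cycles. The grid size, exponent and starting points below are arbitrary demonstration choices.

def f(x, l):
    return 1.0 - abs(1.0 - 2.0 * x) ** l

def cycle_length(x0, l, N):
    # iterate on the discretized state space (multiples of 1/N) until a
    # state repeats, then return the length of the absorbing cycle
    x, seen, step = round(x0 * N) / N, {}, 0
    while x not in seen:
        seen[x] = step
        x = round(f(x, l) * N) / N
        step += 1
    return step - seen[x]

N, l = 10000, 1.5
lengths = [cycle_length(s / 997.0, l, N) for s in range(1, 200)]
print(min(lengths), max(lengths), sorted(set(lengths))[:10])
# many starting points collapse onto a few very short cycles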

10.
We propose exploratory, easily implemented methods for diagnosing the appropriateness of an underlying copula model for bivariate failure time data, allowing censoring in either or both failure times. It is found that the proposed approach effectively distinguishes gamma from positive stable copula models when the sample is moderately large or the association is strong. Data from the Women's Health and Aging Study (WHAS, Guralnik et al., The Women's Health and Aging Study: Health and Social Characteristics of Older Women with Disability. National Institute on Aging: Bethesda, Maryland, 1995) are analyzed to demonstrate the proposed diagnostic methodology. The positive stable model gives a better overall fit to these data than the gamma frailty model, but it tends to underestimate association at the later time points. The finding is consistent with recent theory differentiating catastrophic from progressive disability onset in older adults. The proposed methods supply an interpretable quantity for copula diagnosis. We hope that they will usefully inform practitioners as to the reasonableness of their modeling choices.

11.
Simple, closed form saddlepoint approximations for the distribution and density of the singly and doubly noncentral F distributions are presented. Their overwhelming accuracy is demonstrated numerically using a variety of parameter values. The approximations are shown to be uniform in the right tail and the associated limiting relative error is derived. Difficulties associated with some algorithms used for exact computation of the singly noncentral F are noted.

12.
Evolution strategies (ESs) are a special class of probabilistic, direct, global optimization methods. They are similar to genetic algorithms but work in continuous spaces and have the additional capability of self-adapting their major strategy parameters. This paper presents the most important features of ESs, namely their self-adaptation, as well as their robustness and potential for parallelization which they share with other evolutionary algorithms. Besides the early (1 + 1)-ES and its underlying theoretical results, the modern (μ + λ)-ES and (μ, λ)-ES are presented with special emphasis on the self-adaptation of strategy parameters, a mechanism which enables the algorithm to evolve not only the object variables but also the characteristics of the probability distributions of normally distributed mutations. The self-adaptation property of the algorithm is also illustrated by an experimental example. The robustness of ESs is demonstrated for noisy fitness evaluations and by its application to discrete optimization problems, namely the travelling salesman problem (TSP). Finally, the paper concludes by summarizing existing work and general possibilities regarding the parallelization of evolution strategies and evolutionary algorithms in general.
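For concreteness, here is a minimal (μ, λ)-ES with self-adaptive mutation strength on the sphere function: each individual carries its own step size σ, which is perturbed log-normally before mutating the object variables, and comma selection keeps the μ best of λ offspring. Parameter values are conventional illustrative choices, not prescriptions from the paper.

import math, random

random.seed(0)
dim, mu, lam = 10, 5, 35
tau = 1.0 / math.sqrt(2 * dim)       # learning rate for step-size adaptation

def fitness(x):
    return sum(v * v for v in x)     # sphere function: minimum 0 at the origin

pop = [([random.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(mu)]
for gen in range(200):
    offspring = []
    for _ in range(lam):
        x, sigma = random.choice(pop)
        s = sigma * math.exp(tau * random.gauss(0, 1))   # self-adapt sigma first
        y = [v + s * random.gauss(0, 1) for v in x]      # then mutate the object variables
        offspring.append((y, s))
    pop = sorted(offspring, key=lambda ind: fitness(ind[0]))[:mu]  # comma selection
print(fitness(pop[0][0]), pop[0][1])   # best value and its adapted step size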

13.
The K principal points of a p-variate random variable X are defined as those points ξ_1, ..., ξ_K which minimize the expected squared distance of X from the nearest of the ξ_k. This paper reviews some of the theory of principal points and presents a method of determining principal points of univariate continuous distributions. The method is applied to the uniform distribution, to the normal distribution and to the exponential distribution.
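One standard way to compute univariate principal points (sketched here for the standard normal; not necessarily the paper's exact method) is a Lloyd-type fixed-point iteration: each ξ_k is replaced by the conditional mean of X over the interval of points nearest to it.

import math

def phi(z):   # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def principal_points_normal(K, iters=500):
    xi = [-1.0 + 2.0 * k / (K - 1) for k in range(K)] if K > 1 else [0.0]
    for _ in range(iters):
        # cell boundaries are midpoints between neighbouring xi_k
        b = [-math.inf] + [(xi[k] + xi[k + 1]) / 2 for k in range(K - 1)] + [math.inf]
        # replace each xi_k by the truncated-normal mean of its cell
        xi = [(phi(b[k]) - phi(b[k + 1])) / (Phi(b[k + 1]) - Phi(b[k]))
              for k in range(K)]
    return xi

print(principal_points_normal(2))   # known result: +/- sqrt(2/pi) ~ +/-0.7979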

14.
In this paper, we reconsider the well-known oblique Procrustes problem where the usual least-squares objective function is replaced by a more robust discrepancy measure, based on the ℓ1 norm or smooth approximations of it. We propose two approaches to the solution of this problem. One approach is based on convex analysis and uses the structure of the problem to permit a solution to the ℓ1 norm problem. An alternative approach is to smooth the problem by working with smooth approximations to the ℓ1 norm, and this leads to a solution process based on the solution of ordinary differential equations on manifolds. The general weighted Procrustes problem (both orthogonal and oblique) can also be solved by the latter approach. Numerical examples to illustrate the algorithms which have been developed are reported and analyzed.
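A rough sketch of the smoothing idea (a plain projected gradient method; the paper's manifold ODE machinery is not reproduced here): minimize Σ √(r_ij² + ε) for the residual R = AT − B, renormalizing the columns of T to unit length after each step. The matrices, ε and the step size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 4))
T_true = rng.standard_normal((4, 3))
T_true /= np.linalg.norm(T_true, axis=0)          # unit-norm columns
B = A @ T_true + 0.05 * rng.laplace(size=(20, 3))  # heavy-tailed noise

T = rng.standard_normal((4, 3))
T /= np.linalg.norm(T, axis=0)
eps, lr = 1e-6, 0.01
for _ in range(2000):
    R = A @ T - B
    G = A.T @ (R / np.sqrt(R * R + eps))   # gradient of the smoothed l1 loss
    T -= lr * G
    T /= np.linalg.norm(T, axis=0)         # project columns back to unit norm
print(float(np.abs(T - T_true).max()))     # roughly recovers the generating transform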

15.
Jerome H. Friedman and Nicholas I. Fisher
Many data analytic questions can be formulated as (noisy) optimization problems. They explicitly or implicitly involve finding simultaneous combinations of values for a set of (input) variables that imply unusually large (or small) values of another designated (output) variable. Specifically, one seeks a set of subregions of the input variable space within which the value of the output variable is considerably larger (or smaller) than its average value over the entire input domain. In addition it is usually desired that these regions be describable in an interpretable form involving simple statements (rules) concerning the input values. This paper presents a procedure directed towards this goal based on the notion of patient rule induction. This patient strategy is contrasted with the greedy ones used by most rule induction methods, and the semi-greedy ones used by some partitioning tree techniques such as CART. Applications involving scientific and commercial databases are presented.
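A toy version of patient peeling conveys the flavour of the procedure (a heavy simplification, not the paper's full method): starting from the whole input box, repeatedly remove the small α-fraction slice at one edge of one variable whose removal most increases the output mean, until too little data remains. Data, α and the stopping rule are illustrative.

import numpy as np

def peel(X, y, alpha=0.05, min_support=0.1):
    box = [(X[:, j].min(), X[:, j].max()) for j in range(X.shape[1])]
    idx = np.arange(len(y))
    while len(idx) > min_support * len(y):
        best = None
        for j in range(X.shape[1]):          # try a thin slice at each edge
            for side in ("lo", "hi"):
                q = np.quantile(X[idx, j], alpha if side == "lo" else 1 - alpha)
                keep = idx[X[idx, j] > q] if side == "lo" else idx[X[idx, j] < q]
                if len(keep) and (best is None or y[keep].mean() > best[0]):
                    best = (y[keep].mean(), keep, j, side, q)
        _, idx, j, side, q = best            # patient step: peel only one thin slice
        box[j] = (q, box[j][1]) if side == "lo" else (box[j][0], q)
    return box, y[idx].mean()

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (2000, 3))
y = ((X[:, 0] > 0.7) & (X[:, 1] < 0.4)) + 0.1 * rng.standard_normal(2000)
box, box_mean = peel(X, y)
print(np.round(box, 2), round(float(box_mean), 3))  # roughly recovers the bump region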

16.
Each cell of a two-dimensional lattice is painted one of κ colors, arranged in a color wheel. The colors advance (k to k+1 mod κ) either automatically or by contact with at least a threshold number of successor colors in a prescribed local neighborhood. Discrete-time parallel systems of this sort in which color 0 updates by contact and the rest update automatically are called Greenberg-Hastings (GH) rules. A system in which all colors update by contact is called a cyclic cellular automaton (CCA). Started from appropriate initial conditions, these models generate periodic traveling waves. Started from random configurations the same rules exhibit complex self-organization, typically characterized by nucleation of locally periodic ram's horns or spirals. Corresponding random processes give rise to a variety of forest fire equilibria that display large-scale stochastic wave fronts. This paper describes a framework, theoretically based, but relying on extensive interactive computer graphics experimentation, for investigation of the complex dynamics shared by excitable media in a broad spectrum of scientific contexts. By focusing on simple mathematical prototypes we hope to obtain a better understanding of the basic organizational principles underlying spatially distributed oscillating systems.
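A minimal GH simulation is easy to write down; the sketch below assumes κ colors, the von Neumann neighborhood, contact threshold 1 and a torus, all of which are illustrative choices within the family of rules described above.

import numpy as np

rng = np.random.default_rng(2)
kappa, L, steps = 3, 40, 60
grid = rng.integers(0, kappa, (L, L))          # random initial configuration

for _ in range(steps):
    # does any von Neumann neighbour carry the successor of color 0 (i.e. 1)?
    nbr_is_successor = np.zeros((L, L), dtype=bool)
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr_is_successor |= np.roll(grid, shift, axis=(0, 1)) == 1
    nxt = (grid + 1) % kappa                   # nonzero colors advance automatically
    nxt[grid == 0] = np.where(nbr_is_successor[grid == 0], 1, 0)  # 0 advances by contact
    grid = nxt

print(np.bincount(grid.ravel(), minlength=kappa))  # color counts after relaxation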

17.
Convergence assessment techniques for Markov chain Monte Carlo
MCMC methods have effectively revolutionised the field of Bayesian statistics over the past few years. Such methods provide invaluable tools to overcome problems with analytic intractability inherent in adopting the Bayesian approach to statistical modelling. However, any inference based upon MCMC output relies critically upon the assumption that the Markov chain being simulated has achieved a steady state, or converged. Many techniques have been developed for trying to determine whether or not a particular Markov chain has converged, and this paper aims to review these methods with an emphasis on the mathematics underpinning these techniques, in an attempt to summarise the current state of play for convergence assessment techniques and to motivate directions for future research in this area.
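As one concrete example of the diagnostics such a review covers, the potential scale reduction factor of Gelman and Rubin can be computed from scratch for parallel chains of a toy Metropolis sampler targeting N(0, 1); the step size, chain length and burn-in below are illustrative choices.

import math, random, statistics

def chain(n, start, step=0.5, seed=0):
    # random-walk Metropolis for a standard normal target
    rng, x, out = random.Random(seed), start, []
    for _ in range(n):
        y = x + rng.uniform(-step, step)
        if rng.random() < min(1.0, math.exp(0.5 * (x * x - y * y))):
            x = y
        out.append(x)
    return out

n = 5000
chains = [chain(n, start, seed=i)[n // 2:]      # discard first half as burn-in
          for i, start in enumerate((-5, -1, 1, 5))]
n2 = len(chains[0])
means = [statistics.fmean(c) for c in chains]
W = statistics.fmean(statistics.variance(c) for c in chains)  # within-chain variance
B = n2 * statistics.variance(means)                           # between-chain variance
var_hat = (1 - 1 / n2) * W + B / n2
print(round(math.sqrt(var_hat / W), 3))   # values near 1 suggest approximate convergence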

18.
Summary: Asset prices in general and property prices in particular have gained increasing importance in recent years. Whereas in the late 1980s (after the stock market crash in autumn 1987) and in the last decade these prices mainly came under the heading of asset-price inflation/deflation, the focus has recently shifted to sustainable and viable financial systems. The notes primarily explain why the Deutsche Bundesbank is involved in this area of price statistics, when this involvement began and what underlying data it uses. At the same time, they not only indicate the large degree of uncertainty in the reported data but also highlight the second-best nature of the calculations.
*Lecture given at the 9th conference "Messen der Teuerung" (Measuring Inflation) on 17/18 June 2004 in Marburg. The author expresses his personal views, which need not coincide with those of the Deutsche Bundesbank.

19.
Consider a set of points in the plane with Gaussian perturbations about a regular mean configuration in which a Delaunay triangulation of the mean of the process is composed of equilateral triangles of the same size. The points are labelled at random as black or white, with the variances of the perturbations possibly dependent on the colour. By investigating triangle subsets (with four sets of possible colour labels for the vertices) in detail we propose various test statistics based on a Procrustes shape analysis. A simulation study is carried out to investigate the relative merits and the adequacy of the approximations used in the distributional results, as well as a comparison with simulation methods based on nearest-neighbour distances. The methodology is applied to an investigation of regularity in human muscle fibre cross-sections.

20.
The paper presents non-standard methods in evolutionary computation and discusses their applicability to various optimization problems. These methods maintain populations of individuals with nonlinear chromosomal structure and use genetic operators enhanced by problem-specific knowledge.

