Similar Literature
 20 similar documents found.
1.
This paper compares methods of estimation for the parameters of a Pareto distribution of the first kind to determine which method provides the better estimates when the observations are censored. The unweighted least squares (LS) and the maximum likelihood estimates (MLE) are presented for both censored and uncensored data. The MLEs are obtained using two methods. In the first, called the ML method, it is shown that the log-likelihood is maximized when the scale parameter is the minimum sample value. In the second method, called the modified ML (MML) method, the estimates are found by utilizing the maximum likelihood value of the shape parameter in terms of the scale parameter and the equation for the mean of the first order statistic as a function of both parameters. Since censored data often occur in applications, we study two types of censoring for their effects on the methods of estimation: Type II censoring and multiple random censoring. In this study we consider different sample sizes and several values of the true shape and scale parameters.

Comparisons are made in terms of bias and the mean squared error of the estimates. We propose that the LS method be generally preferred over the ML and MML methods for estimating the Pareto parameter γ for all sample sizes, all values of the parameter and for both complete and censored samples. In many cases, however, the ML estimates are comparable in their efficiency, so that either estimator can effectively be used. For estimating the parameter α, the LS method is also generally preferred for smaller values of the parameter (α ≤ 4). For the larger values of the parameter, and for censored samples, the MML method appears superior to the other methods with a slight advantage over the LS method. For larger values of the parameter α, for censored samples and all methods, underestimation can be a problem.
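As a rough illustration of the ML estimates described in this abstract (not the paper's full censored-data comparison), the sketch below simulates an uncensored Pareto-I sample and computes the standard closed-form ML estimates. The parameter names "shape" and "scale" are generic; how they map onto the paper's γ and α is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an uncensored Pareto-I sample: density a * k**a / x**(a + 1), x >= k
true_shape, true_scale, n = 2.5, 1.0, 200
x = true_scale * (1.0 - rng.uniform(size=n)) ** (-1.0 / true_shape)

# ML estimates for the complete-sample case:
#   the log-likelihood is maximised when the scale estimate is the sample minimum,
#   and the shape estimate then has a closed form.
scale_hat = x.min()
shape_hat = n / np.log(x / scale_hat).sum()

print(f"scale: true={true_scale}, ML={scale_hat:.3f}")
print(f"shape: true={true_shape}, ML={shape_hat:.3f}")
```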

2.
Multi-layer perceptrons (MLPs), a common type of artificial neural networks (ANNs), are widely used in computer science and engineering for object recognition, discrimination and classification, and have more recently found use in process monitoring and control. Training such networks is not a straightforward optimisation problem, and we examine features of these networks which contribute to the optimisation difficulty.

Although the original perceptron, developed in the late 1950s (Rosenblatt 1958, Widrow and Hoff 1960), had a binary output from each node, this was not compatible with back-propagation and similar training methods for the MLP. Hence the output of each node (and the final network output) was made a differentiable function of the network inputs. We reformulate the MLP model with the original perceptron in mind so that each node in the hidden layers can be considered as a latent (that is, unobserved) Bernoulli random variable. This maintains the property of binary output from the nodes, and with an imposed logistic regression of the hidden layer nodes on the inputs, the expected output of our model is identical to the MLP output with a logistic sigmoid activation function (for the case of one hidden layer).

We examine the usual MLP objective function, the sum of squares, and show its multi-modal form and the corresponding optimisation difficulty. We also construct the likelihood for the reformulated latent variable model and maximise it by standard finite mixture ML methods using an EM algorithm, which provides stable ML estimates from random starting positions without the need for regularisation or cross-validation. Over-fitting of the number of nodes does not affect this stability. This algorithm is closely related to the EM algorithm of Jordan and Jacobs (1994) for the Mixture of Experts model.

We conclude with some general comments on the relation between the MLP and latent variable models.

3.
The Generalized Hyperbolic distribution (Barndorff-Nielsen 1977) is a variance-mean mixture of a normal distribution with the Generalized Inverse Gaussian distribution. Recently, subclasses of these distributions (e.g., the hyperbolic distribution and the Normal Inverse Gaussian distribution) have been applied to construct stochastic processes in turbulence and particularly in finance, where multidimensional problems are of special interest. Parameter estimation for these distributions based on an i.i.d. sample is a difficult task even for a specified one-dimensional subclass (a subclass being uniquely defined by the index parameter λ) and relies on numerical methods. For the hyperbolic subclass (λ = 1), the computer program hyp (Blæsild and Sørensen 1992) estimates parameters via ML when the dimensionality is less than or equal to three. To the best of the author's knowledge, no successful attempts have been made to fit any given subclass when the dimensionality is greater than three. This article proposes a simple EM-based (Dempster, Laird and Rubin 1977) ML estimation procedure to estimate parameters of the distribution when the subclass is known regardless of the dimensionality. Our method relies on the ability to numerically evaluate modified Bessel functions of the third kind and their logarithms, which is made possible by currently available software. The method is applied to fit the five-dimensional Normal Inverse Gaussian distribution to a series of returns on foreign exchange rates.
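The numerical ingredient mentioned at the end, evaluating modified Bessel functions of the third kind and their logarithms without overflow or underflow, can be sketched with SciPy's exponentially scaled routine. This is an illustrative fragment under that assumption, not the article's estimation code.

```python
import numpy as np
from scipy.special import kv, kve

def log_bessel_k(lam, x):
    """log K_lam(x), computed via the exponentially scaled kve = exp(x) * kv
    so that large arguments do not underflow kv itself."""
    return np.log(kve(lam, x)) - x

x = np.array([0.5, 5.0, 50.0, 800.0])
lam = 1.0  # e.g. the hyperbolic subclass corresponds to lambda = 1
print(kv(lam, x))            # underflows to 0.0 for the largest argument
print(log_bessel_k(lam, x))  # stays finite
```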

4.
In some situations the asymptotic distribution of a random function T_n(θ) that depends on a nuisance parameter θ is tractable when θ has a known value. In that case it can be used as a test statistic, if suitably constructed, for some hypothesis. However, in practice, θ often needs to be replaced by an estimator S_n. In this paper general results are given concerning the asymptotic distribution of T_n(S_n) that include special cases previously dealt with. In particular, some situations are covered where the usual likelihood theory is nonregular and extreme values are employed to construct estimators and test statistics.

5.
Kernel density classification and boosting: an L2 analysis
Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification.

A relative newcomer to the classification portfolio is boosting, and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
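As a small, self-contained illustration of kernel density classification (the baseline approach the paper analyses, not its boosting algorithm), one can estimate each group's density and assign a new point to the group with the larger estimated prior times density. The group sizes, bandwidth choice and test data below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Two univariate training groups with different means
x0 = rng.normal(loc=0.0, scale=1.0, size=300)
x1 = rng.normal(loc=1.5, scale=1.0, size=300)

kde0, kde1 = gaussian_kde(x0), gaussian_kde(x1)
prior0 = len(x0) / (len(x0) + len(x1))
prior1 = 1.0 - prior0

def classify(x_new):
    """Assign each point to the class with larger prior * estimated density."""
    x_new = np.atleast_1d(x_new)
    return (prior1 * kde1(x_new) > prior0 * kde0(x_new)).astype(int)

test = rng.normal(loc=1.5, scale=1.0, size=200)   # drawn from group 1
print("fraction classified as group 1:", classify(test).mean())
```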

6.
Chu, Hui-May and Kuo, Lynn. Statistics and Computing, 1997, 7(3): 183–192.
Bayesian methods for estimating the dose response curves with the one-hit model, the gamma multi-hit model, and their modified versions with Abbott's correction are studied. The Gibbs sampling approach with data augmentation and with the Metropolis algorithm is employed to compute the Bayes estimates of the potency curves. In addition, estimation of the relative additional risk and the virtually safe dose is studied. Model selection based on conditional predictive ordinates from cross-validated data is developed.
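For reference, the dose-response curves named above have simple closed forms; the sketch below writes them out under standard parameterisations (the Bayesian Gibbs sampling machinery of the paper is not reproduced, and the parameter values are illustrative).

```python
import numpy as np
from scipy.special import gammainc  # regularized lower incomplete gamma

def one_hit(d, lam):
    """One-hit model: P(response at dose d) = 1 - exp(-lam * d)."""
    return 1.0 - np.exp(-lam * d)

def gamma_multi_hit(d, lam, k):
    """Gamma multi-hit model: regularized incomplete gamma of lam * d with shape k."""
    return gammainc(k, lam * d)

def abbott(p, gamma0):
    """Abbott's correction for a background response rate gamma0."""
    return gamma0 + (1.0 - gamma0) * p

dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
print(abbott(one_hit(dose, lam=0.8), gamma0=0.05))
print(abbott(gamma_multi_hit(dose, lam=0.8, k=2.0), gamma0=0.05))
```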

7.
Over the last few years many studies have been carried out in Italy to identify reliable small area labour force indicators. Considering the rotated sample design of the Italian Labour Force Survey, the aim of this work is to derive a small area estimator which borrows strength from individual temporal correlation, as well as from related areas. Two small area estimators are derived as extensions of an estimation strategy proposed by Fuller (1990) for partial overlap samples. A simulation study is carried out to evaluate the gain in efficiency provided by our solutions. Results obtained for different levels of autocorrelation between repeated measurements on the same outcome and different population settings show that these estimators are always more reliable than the traditional composite one, and in some circumstances they are extremely advantageous.

The present paper is financially supported by Murst-Cofin (2001) "L'utilizzo di informazioni di tipo amministrativo nella stima per piccole aree e per sottoinsiemi della popolazione" (National Coordinator Prof. Carlo Filippucci).

8.
Edgoose, T. and Allison, L. Statistics and Computing, 1999, 9(4): 269–278.
General purpose un-supervised classification programs have typically assumed independence between observations in the data they analyse. In this paper we report on an extension to the MML classifier Snob which enables the program to take advantage of some of the extra information implicit in ordered datasets (such as time-series). Specifically, the data is modelled as if it were generated from a first order Markov process with as many states as there are classes of observation. The state of such a process at any point in the sequence determines the class from which the corresponding observation is generated. Such a model is commonly referred to as a Hidden Markov Model. The MML calculation for the expected length of a near optimal two-part message stating a specific model of this type and a dataset given this model is presented. Such an estimate enables us to fairly compare models which differ in the number of classes they specify, which in turn can guide a robust un-supervised search of the model space. The new program, tSnob, is tested against both synthetic data and a large real world dataset and is found to make unbiased estimates of model parameters and to conduct an effective search of the extended model space.

9.
Consider a set of points in the plane with Gaussian perturbations about a regular mean configuration in which a Delaunay triangulation of the mean of the process is comprised of equilateral triangles of the same size. The points are labelled at random as black or white with variances of the perturbations possibly dependent on the colour. By investigating triangle subsets (with four sets of possible colour labels for the vertices) in detail we propose various test statistics based on a Procrustes shape analysis. A simulation study is carried out to investigate the relative merits and the adequacy of the approximations used in the distributional results, as well as a comparison with simulation methods based on nearest-neighbour distances. The methodology is applied to an investigation of regularity in human muscle fibre cross-sections.

10.
Let X, T, Y be random vectors such that the distribution of Y conditional on covariates partitioned into the vectors X = x and T = t is given by f(y; x, θ), where θ = (β, γ(t)). Here β is a parameter vector and γ(t) is a smooth, real-valued function of t. The joint distribution of X and T is assumed to be independent of β and γ. This semiparametric model is called conditionally parametric because the conditional distribution f(y; x, θ) of Y given X = x, T = t is parameterized by a finite dimensional parameter θ = (β, γ(t)). Severini and Wong (1992, Annals of Statistics 20: 1768–1802) show how to estimate β and γ(·) using generalized profile likelihoods, and they also provide a review of the literature on generalized profile likelihoods. Under specified regularity conditions, they derive an asymptotically efficient estimator of β and a uniformly consistent estimator of γ(·). The purpose of this paper is to provide a short tutorial for this method of estimation under a likelihood-based model, reviewing results from Stein (1956, Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, University of California Press, Berkeley, pp. 187–196), Severini (1987, Ph.D. Thesis, The University of Chicago, Department of Statistics, Chicago, Illinois), and Severini and Wong (op. cit.).

11.
In studies of the fracture toughness of irradiated weld metal, specimens are subjected to an increasing load. The test on any one specimen might be terminated by choice or because the specimen ruptures. Prior to termination, ductile tearing might or might not have occurred. The situation is thus basically one of competing risks, with different types of termination, but there are additional features. The major purpose of statistical analysis is to estimate probabilities concerning the values of toughness and crack length. The analysis has been based on a model developed for the joint survivor function of these quantities.

12.
When simulating a dynamical system, the computation is actually of a spatially discretized system, because finite machine arithmetic replaces continuum state space. For chaotic dynamical systems, the discretized simulations often have collapsing effects, to a fixed point or to short cycles. Statistical properties of these phenomena can be modelled with random mappings with an absorbing centre. The model gives results which are very much in line with computational experiments. The effects are discussed with special reference to the family of mappings f_ℓ(x) = 1 − |1 − 2x|^ℓ, x ∈ [0, 1], 1 < ℓ < ∞. Computer experiments show close agreement with predictions of the model.
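A quick experiment in the spirit of this abstract: iterate a map from the family above on a coarse grid (mimicking finite machine arithmetic) and record how quickly trajectories collapse onto a fixed point or a short cycle. The exponent ℓ and the grid size below are illustrative choices, not the paper's settings.

```python
import numpy as np

def f(x, ell=2.0):
    """Member of the family f_ell(x) = 1 - |1 - 2x|**ell on [0, 1]."""
    return 1.0 - abs(1.0 - 2.0 * x) ** ell

def cycle_length(x0, grid=10**4, max_iter=10**6, ell=2.0):
    """Iterate the map rounded to a grid of `grid` points and return the
    length of the cycle the discretized orbit eventually falls into."""
    seen = {}
    x = round(x0 * grid) / grid
    for t in range(max_iter):
        if x in seen:
            return t - seen[x]          # period of the absorbing cycle
        seen[x] = t
        x = round(f(x, ell) * grid) / grid
    return None

rng = np.random.default_rng(2)
lengths = [cycle_length(u) for u in rng.uniform(size=20)]
print(lengths)   # typically very short cycles, often a fixed point
```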

13.
The common approach to analyzing censored data utilizes competing risk models; a class of distributions is first chosen and then the sufficient statistics are identified. An operational Bayesian approach (Barlow 1993) for analyzing censored data would require a somewhat different methodology. In this approach, we first determine potentially observable parameters of interest. We then determine the data summaries (sufficient statistics) for these parameters. Tsai (1994) suggests that the observed sample frequency is sufficient for predicting the population frequency. Invariant probability measures (likelihoods), conditional on the parameters of interest, are then derived based on the principle of sufficiency and the principle of insufficient reason.

Research partially supported by the Army Research Office (DAAL03-91-G-0046) grant to the University of California at Berkeley.

14.
The generalized odds-rate class of regression models for time to event data is indexed by a non-negative constant ρ and assumes that g_ρ(S(t|Z)) = α(t) + β′Z, where g_ρ(s) = log(ρ^{-1}(s^{-ρ} − 1)) for ρ > 0, g_0(s) = log(−log s), S(t|Z) is the survival function of the time to event for an individual with q × 1 covariate vector Z, β is a q × 1 vector of unknown regression parameters, and α(t) is some arbitrary increasing function of t. When ρ = 0, this model is equivalent to the proportional hazards model, and when ρ = 1, this model reduces to the proportional odds model. In the presence of right censoring, we construct estimators for β and exp(α(t)) and show that they are consistent and asymptotically normal. In addition, we show that the estimator for β is semiparametric efficient in the sense that it attains the semiparametric variance bound.
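A small numerical check of the link functions as reconstructed above: for ρ = 1 the link equals the log odds of failure (the proportional odds case), and as ρ → 0 it approaches log(−log s) (the proportional hazards case). The code below is only a sanity check of these two limits, not the paper's estimation procedure.

```python
import numpy as np

def g(s, rho):
    """Generalized odds-rate link: log((s**-rho - 1) / rho) for rho > 0,
    and log(-log(s)) in the limiting case rho = 0."""
    s = np.asarray(s, dtype=float)
    if rho == 0.0:
        return np.log(-np.log(s))
    return np.log((s ** (-rho) - 1.0) / rho)

s = np.linspace(0.05, 0.95, 5)
print(np.allclose(g(s, 1.0), np.log((1.0 - s) / s)))   # proportional odds form
print(np.allclose(g(s, 1e-8), g(s, 0.0), atol=1e-5))   # rho -> 0 recovers cloglog
```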

15.
The maximum likelihood (ML) equations calculated from censored normal samples do not admit explicit solutions. A principle of modification is given and modified maximum likelihood (MML) equations, which admit explicit solutions, are defined. This approach makes it possible to tackle the hitherto unresolved problem of estimating and testing hypotheses about group-effects in one-way classification experimental designs based on Type I censored normal samples. The MML estimators of group-effects are obtained as explicit functions of the sample observations and shown to be asymptotically identical with the ML estimators and hence BAN (best asymptotically normal) estimators. A statistic t is defined to test a linear contrast of group-effects and shown to be asymptotically normally distributed. A numerical example is presented which illustrates the procedure.

16.
A new area of research interest is the computation of exact confidence limits or intervals for a scalar parameter of interest from discrete data by inverting a hypothesis test based on a studentized test statistic. See, for example, Chan and Zhang (1999), Agresti and Min (2001) and Agresti (2003), who deal with a difference of binomial probabilities, and Agresti and Min (2002), who deal with an odds ratio. However, neither (1) a detailed analysis of the computational issues involved nor (2) a reliable method of computation that deals effectively with these issues is currently available. In this paper we solve these two problems for a very broad class of discrete data models. We suppose that the distribution of the data is determined by (θ, ψ), where ψ is a nuisance parameter vector. We also consider six different studentized test statistics. Our contributions to (1) are as follows. We show that the P-value resulting from the hypothesis test, considered as a function of the null-hypothesized value of θ, has both jump and drop discontinuities. Numerical examples are used to demonstrate that these discontinuities lead to the failure of simple-minded approaches to the computation of the confidence limit or interval. We also provide a new method for efficiently computing the set of all possible locations of these discontinuities. Our contribution to (2) is to provide a new and reliable method of computing the confidence limit or interval, based on the knowledge of this set.
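The jump and drop discontinuities described above are easy to see even in the simplest one-parameter setting with no nuisance parameter. The toy sketch below (a simplified analogue, not the paper's studentized-statistic algorithm) evaluates an exact binomial P-value on a grid of null-hypothesized values and flags where it jumps; the data, grid and thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom

n, y = 20, 6   # observed data: y successes out of n trials

def exact_pvalue(p0):
    """Exact two-sided P-value for H0: p = p0 with a single binomial count:
    total probability of outcomes no more likely than the observed one."""
    probs = binom.pmf(np.arange(n + 1), n, p0)
    return probs[probs <= probs[y] * (1 + 1e-12)].sum()

grid = np.linspace(0.01, 0.99, 981)
pvals = np.array([exact_pvalue(p0) for p0 in grid])

# Large changes between neighbouring grid points reveal jump/drop discontinuities
jumps = np.where(np.abs(np.diff(pvals)) > 0.02)[0]
print("approximate discontinuity locations:", np.round(grid[jumps], 3))

# A naive confidence set by direct inversion of the test at the 5% level
kept = grid[pvals > 0.05]
print("p0 values kept:", kept.min(), "to", kept.max())
```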

17.
It is well-known that multivariate curve estimation suffers from the curse of dimensionality. However, reasonable estimators are possible, even in several dimensions, under appropriate restrictions on the complexity of the curve. In the present paper we explore how much appropriate wavelet estimators can exploit a typical restriction on the curve such as additivity. We first propose an adaptive and simultaneous estimation procedure for all additive components in additive regression models and discuss rate of convergence results and data-dependent truncation rules for wavelet series estimators. To speed up computation we then introduce a wavelet version of the functional ANOVA algorithm for additive regression models and propose a regularization algorithm which guarantees an adaptive solution to the multivariate estimation problem. Some simulations indicate that wavelet methods complement nicely the existing methodology for nonparametric multivariate curve estimation.
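As a building block only (not the paper's simultaneous additive-model procedure), a standard wavelet series estimator with a data-dependent threshold looks like the sketch below; the wavelet family, decomposition level and universal threshold are conventional choices assumed here.

```python
import numpy as np
import pywt

rng = np.random.default_rng(3)
n = 512
t = np.linspace(0.0, 1.0, n)
signal = np.sin(4 * np.pi * t) + 0.5 * (t > 0.5)        # smooth part plus a jump
y = signal + 0.3 * rng.normal(size=n)

# Wavelet decomposition, soft thresholding, reconstruction
coeffs = pywt.wavedec(y, 'db4', level=5)
sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745      # noise scale from finest details
thresh = sigma_hat * np.sqrt(2.0 * np.log(n))           # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
f_hat = pywt.waverec(coeffs, 'db4')

print("RMSE of wavelet estimate:", np.sqrt(np.mean((f_hat - signal) ** 2)))
```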

18.
The K principal points of a p-variate random variable X are defined as those points ξ_1, ..., ξ_K which minimize the expected squared distance of X from the nearest of the ξ_k. This paper reviews some of the theory of principal points and presents a method of determining principal points of univariate continuous distributions. The method is applied to the uniform distribution, to the normal distribution and to the exponential distribution.
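One simple way to approximate the K principal points of a univariate continuous distribution (not necessarily the paper's method) is a Lloyd-type fixed-point iteration on a large Monte Carlo sample: assign each observation to its nearest point, then replace each point by the mean of its assigned observations. The choice K = 3 and the standard normal target below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=200_000)        # large sample from the target distribution
K = 3

# Lloyd-type iteration for the K principal points
xi = np.quantile(x, (np.arange(K) + 0.5) / K)     # starting values
for _ in range(100):
    labels = np.argmin(np.abs(x[:, None] - xi[None, :]), axis=1)
    xi = np.array([x[labels == k].mean() for k in range(K)])

print(np.round(np.sort(xi), 3))   # approx. (-1.224, 0, 1.224) for the standard normal
```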

19.
Jerome H. Friedman and Nicholas I. Fisher
Many data analytic questions can be formulated as (noisy) optimization problems. They explicitly or implicitly involve finding simultaneous combinations of values for a set of (input) variables that imply unusually large (or small) values of another designated (output) variable. Specifically, one seeks a set of subregions of the input variable space within which the value of the output variable is considerably larger (or smaller) than its average value over the entire input domain. In addition it is usually desired that these regions be describable in an interpretable form involving simple statements (rules) concerning the input values. This paper presents a procedure directed towards this goal based on the notion of patient rule induction. This patient strategy is contrasted with the greedy ones used by most rule induction methods, and semi-greedy ones used by some partitioning tree techniques such as CART. Applications involving scientific and commercial data bases are presented.
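To make the "patient" idea concrete, here is a bare-bones version of the top-down peeling step used in patient rule induction: at each step only a small fraction of the remaining points is removed from one face of the box. This sketch covers peeling only (no pasting or covering), and the peel fraction, support floor and toy data are assumed illustrative values.

```python
import numpy as np

def prim_peel(X, y, alpha=0.05, min_support=0.1):
    """Patiently shrink a box: at each step remove the small fraction alpha of
    remaining points (from one face of one variable) that leaves the highest
    mean of y inside the box, until the box support falls to min_support."""
    n, p = X.shape
    inside = np.ones(n, dtype=bool)
    box = [(-np.inf, np.inf)] * p
    while inside.mean() > min_support:
        best = None
        for j in range(p):
            xj = X[inside, j]
            for side, cut in (("low", np.quantile(xj, alpha)),
                              ("high", np.quantile(xj, 1 - alpha))):
                keep = inside & ((X[:, j] > cut) if side == "low" else (X[:, j] < cut))
                if keep.sum() == 0:
                    continue
                score = y[keep].mean()
                if best is None or score > best[0]:
                    best = (score, j, side, cut, keep)
        _, j, side, cut, keep = best
        lo, hi = box[j]
        box[j] = (max(lo, cut), hi) if side == "low" else (lo, min(hi, cut))
        inside = keep
    return box, inside

# Toy data: the output is large only when both inputs are large
rng = np.random.default_rng(5)
X = rng.uniform(size=(2000, 2))
y = 5.0 * ((X[:, 0] > 0.7) & (X[:, 1] > 0.6)) + rng.normal(size=2000)
box, inside = prim_peel(X, y)
print("box:", [(round(a, 2), round(b, 2)) for a, b in box])
print("mean y inside box:", round(y[inside].mean(), 2))
```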

20.
When constructing uniform random numbers in [0, 1] from the output of a physical device, usually n independent and unbiased bits B_j are extracted and combined into the machine number Y = Σ_{j=1}^{n} B_j 2^{−j}. In order to reduce the number of data used to build one real number, we observe that for independent and exponentially distributed random variables X_n (which arise, for example, as waiting times between two consecutive impulses of a Geiger counter) the variable U_n := X_{2n−1}/(X_{2n−1} + X_{2n}) is uniform in [0, 1]. In the practical application, X_n can only be measured up to a given precision δ (in terms of the expectation of the X_n); it is shown that the distribution function obtained by calculating U_n from these measurements differs from the uniform by less than δ/2. We compare this deviation with the error resulting from the use of biased bits B_j with P{B_j = 1} = 1/2 + ε (where ε ∈ ]−1/2, 1/2[) in the construction of Y above. The influence of the bias is given by the estimate that in the p-total variation norm ‖Q‖_{TV_p} = (Σ |Q(ω)|^p)^{1/p} (p ≥ 1) we have ‖P^Y − P_0^Y‖_{TV_p} ≤ (c_n · ε)^{1/p}, with c_n → p for n → ∞. For the distribution function, ‖F^Y − F_0^Y‖ ≤ 2(1 − 2^{−n})|ε| holds.
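The basic claim, that the ratio of one exponential waiting time to the sum of two consecutive waiting times is uniform on [0, 1], is easy to verify by simulation; a minimal sketch (sample size and test are illustrative choices):

```python
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(6)

# Pairs of independent exponential waiting times (e.g. Geiger counter inter-arrivals)
x = rng.exponential(scale=1.0, size=2 * 100_000)
u = x[0::2] / (x[0::2] + x[1::2])        # U_n = X_{2n-1} / (X_{2n-1} + X_{2n})

print(kstest(u, 'uniform'))               # should not reject uniformity on [0, 1]
```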

