Similar Articles (20 results)
1.
In some situations the asymptotic distribution of a random function T_n(θ) that depends on a nuisance parameter θ is tractable when θ has a known value. In that case it can be used as a test statistic, if suitably constructed, for some hypothesis. However, in practice θ often needs to be replaced by an estimator S_n. In this paper general results are given concerning the asymptotic distribution of T_n(S_n) that include special cases previously dealt with. In particular, some situations are covered where the usual likelihood theory is nonregular and extreme values are employed to construct estimators and test statistics.

2.
The generalized odds-rate class of regression models for time-to-event data is indexed by a non-negative constant ρ and assumes that g_ρ(S(t|Z)) = α(t) + β'Z, where g_ρ(s) = log(ρ^{-1}(s^{-ρ} − 1)) for ρ > 0, g_0(s) = log(−log s), S(t|Z) is the survival function of the time to event for an individual with q×1 covariate vector Z, β is a q×1 vector of unknown regression parameters, and α(t) is some arbitrary increasing function of t. When ρ = 0 this model is equivalent to the proportional hazards model, and when ρ = 1 it reduces to the proportional odds model. In the presence of right censoring, we construct estimators for β and exp(α(t)) and show that they are consistent and asymptotically normal. In addition, we show that the estimator for β is semiparametric efficient in the sense that it attains the semiparametric variance bound.
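As a quick numerical illustration of the link function (ours, not from the paper; the function name `g` is an assumption), the following Python sketch checks that g_ρ interpolates between the complementary log-log link (ρ → 0, proportional hazards) and the log-odds-of-failure link (ρ = 1, proportional odds):

```python
import math

def g(rho, s):
    """Generalized odds-rate link: g_rho(s) = log((s**-rho - 1)/rho) for rho > 0,
    with the limit g_0(s) = log(-log(s)) at rho = 0."""
    if rho == 0.0:
        return math.log(-math.log(s))
    return math.log((s ** (-rho) - 1.0) / rho)

s = 0.7
# rho -> 0 recovers the complementary log-log (proportional hazards) link
assert abs(g(1e-8, s) - math.log(-math.log(s))) < 1e-6
# rho = 1 gives log((1-s)/s), the log-odds of failure (proportional odds)
assert abs(g(1.0, s) - math.log((1.0 - s) / s)) < 1e-12
```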

3.
When constructing uniform random numbers in [0, 1] from the output of a physical device, usually n independent and unbiased bits B_j are extracted and combined into the machine number Y = Σ_{j=1}^{n} B_j 2^{−j}. In order to reduce the number of data used to build one real number, we observe that for independent and exponentially distributed random variables X_n (which arise, for example, as waiting times between two consecutive impulses of a Geiger counter) the variable U_n := X_{2n−1}/(X_{2n−1} + X_{2n}) is uniform in [0, 1]. In the practical application X_n can only be measured up to a given precision δ (in terms of the expectation of the X_n); it is shown that the distribution function obtained by calculating U_n from these measurements differs from the uniform by less than δ/2. We compare this deviation with the error resulting from the use of biased bits B_j with P{B_j = 1} = 1/2 + ε (where ε ∈ ]−1/2, 1/2[) in the construction of Y above. The influence of a bias is given by the estimate that in the p-total variation norm ‖Q‖_{TV_p} = (Σ |Q(ω)|^p)^{1/p} (p ≥ 1) we have ‖P^Y − P_0^Y‖_{TV_p} ≤ (c_n · ε)^{1/p} with c_n → c_p for n → ∞. For the distribution functions, ‖F^Y − F_0^Y‖ ≤ 2(1 − 2^{−n})|ε| holds.
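The uniformity of U_n is easy to verify by simulation. The following Python sketch (our illustration, not part of the paper) draws pairs of exponential variables and checks that the empirical distribution of X_{2n−1}/(X_{2n−1} + X_{2n}) is close to uniform:

```python
import random

random.seed(1)

def uniform_from_exponentials(rate=1.0):
    """U = X1/(X1+X2) for i.i.d. exponential X1, X2 is uniform on [0, 1]."""
    x1 = random.expovariate(rate)
    x2 = random.expovariate(rate)
    return x1 / (x1 + x2)

n = 100_000
u = sorted(uniform_from_exponentials() for _ in range(n))
# Kolmogorov-Smirnov distance between the empirical CDF and the uniform CDF
ks = max(max(abs((i + 1) / n - ui), abs(i / n - ui)) for i, ui in enumerate(u))
assert ks < 0.01
print(f"KS distance: {ks:.4f}")
```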

4.
When simulating a dynamical system, the computation is actually of a spatially discretized system, because finite machine arithmetic replaces continuum state space. For chaotic dynamical systems, the discretized simulations often have collapsing effects, to a fixed point or to short cycles. Statistical properties of these phenomena can be modelled with random mappings with an absorbing centre. The model gives results which are very much in line with computational experiments. The effects are discussed with special reference to the family of mappings f_ℓ(x) = 1 − |1 − 2x|^ℓ, x ∈ [0, 1], 1 < ℓ < ∞. Computer experiments show close agreement with predictions of the model.
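The collapsing effect is easy to reproduce. The sketch below (ours, with an illustrative grid size and starting point) iterates f_ℓ with ℓ = 2 on a uniform grid that mimics finite machine arithmetic; on a finite grid every orbit is eventually periodic, and for chaotic maps the cycle it collapses onto is typically far shorter than the grid size.

```python
def discretized_orbit(ell=2.0, grid=4096, x0=0.1):
    """Iterate f_ell(x) = 1 - |1 - 2x|**ell on a uniform grid of grid+1 points,
    mimicking finite machine arithmetic, until the orbit repeats."""
    snap = lambda x: round(x * grid) / grid          # spatial discretization
    x, seen, step = snap(x0), {}, 0
    while x not in seen:
        seen[x] = step
        x = snap(1.0 - abs(1.0 - 2.0 * x) ** ell)
        step += 1
    return step - seen[x], seen[x]                    # (cycle length, transient length)

cycle, transient = discretized_orbit()
# the orbit lives on at most grid+1 = 4097 states, so it must repeat
assert cycle >= 1 and transient + cycle <= 4097
print(f"cycle length {cycle}, transient {transient}")
```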

5.
In Flury (1990) the k principal points of a random vector X are defined as the points p(1), ..., p(k) minimizing E[min_{i=1,...,k} ‖X − p(i)‖²]. We extend this concept to that of k principal points with respect to a loss function L, and present an algorithm for their computation in the univariate case.
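The paper's algorithm for a general loss L is not reproduced here, but for the classical squared-error loss a Lloyd-type alternating scheme (our construction) illustrates the idea: partition the line by the Voronoi regions of the current points, then replace each point by the conditional mean of its region. It recovers the known 2 principal points ±√(2/π) of the standard normal.

```python
import math

def principal_points_normal(k=2, iters=200, lo=-8.0, hi=8.0, m=4000):
    """Lloyd-type algorithm for the k principal points of N(0,1) under squared-error
    loss: alternate (i) nearest-point partition, (ii) conditional means,
    using a midpoint-rule grid to approximate the normal density."""
    phi = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
    xs = [lo + (hi - lo) * (j + 0.5) / m for j in range(m)]   # integration grid
    w = [(hi - lo) / m * phi(x) for x in xs]                  # grid weights
    pts = [lo + (hi - lo) * (i + 1) / (k + 1) for i in range(k)]  # initial guess
    for _ in range(iters):
        mass, mean = [0.0] * k, [0.0] * k
        for x, wx in zip(xs, w):
            i = min(range(k), key=lambda i: (x - pts[i]) ** 2)
            mass[i] += wx
            mean[i] += wx * x
        pts = sorted(mean[i] / mass[i] for i in range(k))
    return pts

p = principal_points_normal()
# known result: the 2 principal points of N(0,1) are +-sqrt(2/pi) ~ +-0.7979
assert abs(p[0] + math.sqrt(2 / math.pi)) < 0.01
assert abs(p[1] - math.sqrt(2 / math.pi)) < 0.01
```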

6.
The aim of the paper is to find the univariate stationary distribution of a particular bilinear process. In this context, we propose a novel approach to deriving the distribution function. It is based on a recursive formula and allows us to relax the conditions on the moments of the process. We also show that the derived approximation converges to the true distribution function. The accuracy of the recursive formula is evaluated for finite sample dimensions by a small simulation study. Received February 2003; revised May 2004.

7.
A probabilistic expert system provides a graphical representation of a joint probability distribution which can be used to simplify and localize calculations. Jensen et al. (1990) introduced a flow-propagation algorithm for calculating marginal and conditional distributions in such a system. This paper analyses that algorithm in detail, and shows how it can be modified to perform other tasks, including maximization of the joint density and simultaneous fast retraction of evidence entered on several variables.

8.
A new area of research interest is the computation of exact confidence limits or intervals for a scalar parameter of interest θ from discrete data by inverting a hypothesis test based on a studentized test statistic. See, for example, Chan and Zhang (1999), Agresti and Min (2001) and Agresti (2003), who deal with a difference of binomial probabilities, and Agresti and Min (2002), who deal with an odds ratio. However, neither (1) a detailed analysis of the computational issues involved nor (2) a reliable method of computation that deals effectively with these issues is currently available. In this paper we solve these two problems for a very broad class of discrete data models. We suppose that the distribution of the data is determined by (θ, ψ), where ψ is a nuisance parameter vector. We also consider six different studentized test statistics. Our contributions to (1) are as follows. We show that the P-value resulting from the hypothesis test, considered as a function of the null-hypothesized value of θ, has both jump and drop discontinuities. Numerical examples are used to demonstrate that these discontinuities lead to the failure of simple-minded approaches to the computation of the confidence limit or interval. We also provide a new method for efficiently computing the set of all possible locations of these discontinuities. Our contribution to (2) is to provide a new and reliable method of computing the confidence limit or interval, based on the knowledge of this set.
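The paper's setting involves a nuisance parameter, but the idea of inverting an exact test can be sketched in the simplest case without one: the Clopper-Pearson upper limit for a binomial proportion (our illustration; function names are ours). Bisection works here only because the P-value function is monotone and continuous in θ; with a nuisance parameter it has the jump/drop discontinuities the paper analyses, and this simple approach fails.

```python
import math

def binom_cdf(x, n, theta):
    """P(X <= x) for X ~ Binomial(n, theta)."""
    return sum(math.comb(n, k) * theta**k * (1 - theta)**(n - k) for k in range(x + 1))

def exact_upper_limit(x, n, alpha=0.025, tol=1e-10):
    """Exact (Clopper-Pearson) upper confidence limit, found by inverting the
    test P(X <= x | theta) = alpha with bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) > alpha:   # P-value still too large: limit is higher
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

u = exact_upper_limit(0, 10)
# closed form for x = 0: theta_U = 1 - alpha**(1/n)
assert abs(u - (1 - 0.025 ** 0.1)) < 1e-8
print(f"95% upper limit for 0/10 successes: {u:.4f}")
```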

9.
Simple, closed-form saddlepoint approximations for the distribution and density of the singly and doubly noncentral F distributions are presented. Their overwhelming accuracy is demonstrated numerically using a variety of parameter values. The approximations are shown to be uniform in the right tail, and the associated limiting relative error is derived. Difficulties associated with some algorithms used for exact computation of the singly noncentral F are noted.

10.
Fitting Weibull duration models with random effects
Duration time models often should include correlated failure times, due to clustered data. These random effects hierarchical models are sometimes called frailty models when used for survival analyses. The data analyzed here involve such correlations because patient-level outcomes (the times until graft failure following kidney transplantation) are observed, but patients are clustered in different transplant centers. We describe fitting such models by combining two kinds of software: one for parametric survival regression models, and the other for doing Poisson regression in a hierarchical setting. The latter is implemented by using PRIMM (Poisson Regression and Interactive Multilevel Modeling) methods and software (Christiansen & Morris, 1994a). An illustrative example for profiling data is included with k = 11 kidney transplant centers and N = 412 patients.

11.
Over the last few years many studies have been carried out in Italy to identify reliable small area labour force indicators. Considering the rotated sample design of the Italian Labour Force Survey, the aim of this work is to derive a small area estimator which borrows strength from individual temporal correlation, as well as from related areas. Two small area estimators are derived as extensions of estimation strategies proposed by Fuller (1990) for partial overlap samples. A simulation study is carried out to evaluate the gain in efficiency provided by our solutions. Results obtained for different levels of autocorrelation between repeated measurements on the same outcome and different population settings show that these estimators are always more reliable than the traditional composite one, and in some circumstances they are extremely advantageous. The present paper is financially supported by Murst-Cofin (2001) "L'utilizzo di informazioni di tipo amministrativo nella stima per piccole aree e per sottoinsiemi della popolazione" (National Coordinator: Prof. Carlo Filippucci).

12.
Summary: We describe depth-based graphical displays that show the interdependence of multivariate distributions. The plots involve one-dimensional curves or bivariate scatterplots, so they are easier to interpret than correlation matrices. The correlation curve, modelled on the scale curve of Liu et al. (1999), compares the volume of the observed central regions with the volume under independence. The correlation DD-plot is the scatterplot of depth values under a reference distribution against depth values under independence. The area of the plot gives a measure of distance from independence. The correlation curve and DD-plot require an independence model as a baseline: besides classical parametric specifications, a nonparametric estimator, derived from the randomization principle, is used. Combining data depth and the notion of quadrant dependence, quadrant correlation trajectories are obtained which allow simultaneous representation of subsets of variables. The properties of the plots for the multivariate normal distribution are investigated. Some real data examples are illustrated. *This work was completed with the support of Ca' Foscari University.

13.
Evolution strategies (ESs) are a special class of probabilistic, direct, global optimization methods. They are similar to genetic algorithms but work in continuous spaces and have the additional capability of self-adapting their major strategy parameters. This paper presents the most important features of ESs, namely their self-adaptation, as well as their robustness and potential for parallelization, which they share with other evolutionary algorithms.

Besides the early (1 + 1)-ES and its underlying theoretical results, the modern (μ + λ)-ES and (μ, λ)-ES are presented with special emphasis on the self-adaptation of strategy parameters, a mechanism which enables the algorithm to evolve not only the object variables but also the characteristics of the probability distributions of normally distributed mutations. The self-adaptation property of the algorithm is also illustrated by an experimental example.

The robustness of ESs is demonstrated for noisy fitness evaluations and by application to discrete optimization problems, namely the travelling salesman problem (TSP).

Finally, the paper concludes by summarizing existing work and general possibilities regarding the parallelization of evolution strategies and evolutionary algorithms in general.
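A minimal (μ, λ)-ES with log-normal self-adaptation of a single step size per individual can be written in a few lines. The sketch below is our illustration, not the paper's implementation; all parameter settings (population sizes, learning rate, generation count) are illustrative. It minimizes the sphere function.

```python
import math
import random

random.seed(7)

def es_sphere(dim=5, mu=5, lam=35, gens=300):
    """Minimal (mu, lambda)-ES on f(x) = sum(x_i^2), with log-normal
    self-adaptation of one mutation step size sigma per individual."""
    tau = 1.0 / math.sqrt(dim)                       # learning rate for sigma
    f = lambda x: sum(v * v for v in x)
    pop = [([random.uniform(-5, 5) for _ in range(dim)], 1.0) for _ in range(mu)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            x, sigma = random.choice(pop)
            s = sigma * math.exp(tau * random.gauss(0, 1))  # mutate strategy parameter first
            y = [v + s * random.gauss(0, 1) for v in x]     # then the object variables
            offspring.append((y, s))
        pop = sorted(offspring, key=lambda ind: f(ind[0]))[:mu]  # comma selection
    return f(pop[0][0])

best = es_sphere()
assert best < 1e-2   # the self-adapted step size lets the ES home in on the optimum
print(f"best fitness: {best:.2e}")
```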

14.
In studies of the fracture toughness of irradiated weld metal, specimens are subjected to an increasing load. The test on any one specimen might be terminated by choice or because the specimen ruptures. Prior to termination, ductile tearing might or might not have occurred. The situation is thus basically one of competing risks, with different types of termination, but there are additional features. The major purpose of statistical analysis is to estimate probabilities concerning the values of toughness and crack length. The analysis has been based on a model developed for the joint survivor function of these quantities.

15.
Multi-layer perceptrons (MLPs), a common type of artificial neural networks (ANNs), are widely used in computer science and engineering for object recognition, discrimination and classification, and have more recently found use in process monitoring and control. Training such networks is not a straightforward optimisation problem, and we examine features of these networks which contribute to the optimisation difficulty.

Although the original perceptron, developed in the late 1950s (Rosenblatt 1958, Widrow and Hoff 1960), had a binary output from each node, this was not compatible with back-propagation and similar training methods for the MLP. Hence the output of each node (and the final network output) was made a differentiable function of the network inputs. We reformulate the MLP model with the original perceptron in mind, so that each node in the hidden layers can be considered as a latent (that is, unobserved) Bernoulli random variable. This maintains the property of binary output from the nodes, and with an imposed logistic regression of the hidden layer nodes on the inputs, the expected output of our model is identical to the MLP output with a logistic sigmoid activation function (for the case of one hidden layer).

We examine the usual MLP objective function (the sum of squares) and show its multi-modal form and the corresponding optimisation difficulty. We also construct the likelihood for the reformulated latent variable model and maximise it by standard finite mixture ML methods using an EM algorithm, which provides stable ML estimates from random starting positions without the need for regularisation or cross-validation. Over-fitting of the number of nodes does not affect this stability. This algorithm is closely related to the EM algorithm of Jordan and Jacobs (1994) for the Mixture of Experts model.

We conclude with some general comments on the relation between the MLP and latent variable models.
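The equivalence between the latent-variable view and the deterministic MLP can be checked directly in the linear-output case: if the hidden nodes are independent Bernoulli variables with success probabilities given by a logistic regression on the inputs, the exact expectation of a linear output over all hidden configurations equals the sigmoid-hidden MLP output. The sketch below (our toy network; all weights are illustrative, not from the paper) enumerates all 2³ configurations.

```python
import itertools
import math

sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# toy network: 2 inputs, 3 hidden nodes, linear output (illustrative weights)
W = [[0.5, -1.0], [1.5, 0.3], [-0.7, 0.8]]   # hidden-layer weights
v, b = [1.0, -2.0, 0.5], 0.1                  # output weights and bias
x = [0.4, -0.2]

p = [sigmoid(sum(wij * xj for wij, xj in zip(wi, x))) for wi in W]  # P(h_j = 1)

# exact expectation of the output over all 2^3 latent Bernoulli configurations
e_latent = 0.0
for h in itertools.product([0, 1], repeat=3):
    prob = math.prod(pj if hj else 1 - pj for pj, hj in zip(p, h))
    e_latent += prob * (sum(vj * hj for vj, hj in zip(v, h)) + b)

# deterministic MLP with logistic sigmoid hidden activations, linear output
mlp_out = sum(vj * pj for vj, pj in zip(v, p)) + b

assert abs(e_latent - mlp_out) < 1e-12
```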

16.
17.
A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant α, and the noise model has a single parameter β. The ratio α/β alone is responsible for determining globally all these attributes of the interpolant: its complexity, flexibility, smoothness, characteristic scale length, and characteristic amplitude. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of conditional convexity when designing models with many hyperparameters. We apply the new models to the interpolation of neuronal spike data and demonstrate a substantial improvement in generalization error.

18.
Chu, Hui-May and Kuo, Lynn. Statistics and Computing, 1997, 7(3): 183-192.
Bayesian methods for estimating the dose response curves with the one-hit model, the gamma multi-hit model, and their modified versions with Abbott's correction are studied. The Gibbs sampling approach with data augmentation and with the Metropolis algorithm is employed to compute the Bayes estimates of the potency curves. In addition, estimation of the relative additional risk and the virtually safe dose is studied. Model selection based on conditional predictive ordinates from cross-validated data is developed.

19.
Jerome H. Friedman and Nicholas I. Fisher
Many data analytic questions can be formulated as (noisy) optimization problems. They explicitly or implicitly involve finding simultaneous combinations of values for a set of (input) variables that imply unusually large (or small) values of another designated (output) variable. Specifically, one seeks a set of subregions of the input variable space within which the value of the output variable is considerably larger (or smaller) than its average value over the entire input domain. In addition it is usually desired that these regions be describable in an interpretable form involving simple statements (rules) concerning the input values. This paper presents a procedure directed towards this goal based on the notion of patient rule induction. This patient strategy is contrasted with the greedy ones used by most rule induction methods, and the semi-greedy ones used by some partitioning tree techniques such as CART. Applications involving scientific and commercial databases are presented.
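A one-dimensional caricature of patient peeling (our sketch, not the authors' PRIM implementation; parameter names and settings are ours) conveys the idea: repeatedly remove a small α-fraction from whichever end of the current interval most increases the mean of the output, stopping at a minimum support.

```python
import random

random.seed(3)

def prim_1d(x, y, alpha=0.05, min_support=0.15):
    """Patient rule induction in one input dimension: peel a small alpha-fraction
    off whichever end of the current interval most increases the mean of y,
    until the box's support drops to min_support."""
    data = sorted(zip(x, y))
    n = len(data)
    lo, hi = 0, n
    while (hi - lo) / n > min_support:
        k = max(1, int(alpha * (hi - lo)))           # patient: small slice per step
        mean = lambda a, b: sum(d[1] for d in data[a:b]) / (b - a)
        if mean(lo + k, hi) >= mean(lo, hi - k):
            lo += k                                   # peel from the left
        else:
            hi -= k                                   # peel from the right
    return data[lo][0], data[hi - 1][0]

# output is unusually large inside [0.6, 0.8]
x = [random.random() for _ in range(5000)]
y = [(3.0 if 0.6 <= xi <= 0.8 else 0.0) + random.gauss(0, 1) for xi in x]
a, b = prim_1d(x, y)
assert 0.5 < a < b < 0.9
print(f"recovered rule: {a:.2f} <= x <= {b:.2f}")
```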

20.
In reliability and biometry, it is common practice to choose a failure model by first assessing the failure rate function subjectively, and then invoking the well-known exponentiation formula. The derivation of this formula is based on the assumption that the underlying failure distribution be absolutely continuous. Thus, implicit in the above approach is the understanding that the selected failure distribution will be absolutely continuous. The purpose of this note is to point out that absolute continuity may fail when the failure rate is assessed conditionally, and in particular when it is conditioned on certain types of covariates, called internal covariates. When such is the case, the exponentiation formula should not be used.
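For absolutely continuous distributions the exponentiation formula S(t) = exp(−∫₀ᵗ h(u) du) is exact, which a quick numerical check illustrates (our sketch, using a Weibull example):

```python
import math

# Weibull example: hazard h(t) = (k/lam)*(t/lam)**(k-1), survival S(t) = exp(-(t/lam)**k)
k, lam = 1.5, 2.0
hazard = lambda t: (k / lam) * (t / lam) ** (k - 1)
survival = lambda t: math.exp(-((t / lam) ** k))

def surv_from_hazard(t, m=100_000):
    """Exponentiation formula S(t) = exp(-integral_0^t h(u) du), valid for
    absolutely continuous failure distributions (midpoint-rule integration)."""
    cum = sum(hazard((i + 0.5) * t / m) for i in range(m)) * t / m
    return math.exp(-cum)

t = 3.0
assert abs(surv_from_hazard(t) - survival(t)) < 1e-6
```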
