Similar articles
20 similar articles found.
1.
Existing estimators of a finite population distribution function that utilize auxiliary information are often constructed by a pointwise argument. As a result, these estimators are not always monotone. We adopt a functional approach to the problem and propose two estimators based on compositions of functions. Asymptotic variance formulae are derived for the proposed estimators. Comparisons are made with existing estimators in a simulation study using three natural populations.
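
A minimal sketch of the problem this abstract refers to: a pointwise estimator of a distribution function need not be monotone, and a naive fix is a running maximum. The toy "auxiliary correction" below is purely illustrative; it is not the composition-based estimators proposed in the paper.

```python
import numpy as np

# Illustrates non-monotone pointwise CDF estimates and a simple monotonization.
rng = np.random.default_rng(0)
y = rng.gamma(2.0, 1.0, 50)                          # sample of the study variable
t_grid = np.linspace(0.5, 6.0, 12)

ecdf = np.array([np.mean(y <= t) for t in t_grid])   # basic pointwise estimator
correction = 0.05 * rng.normal(size=t_grid.size)     # stand-in for pointwise auxiliary adjustments
f_point = np.clip(ecdf + correction, 0.0, 1.0)       # adjusted estimate: may decrease in t

f_monotone = np.maximum.accumulate(f_point)          # running maximum restores monotonicity
print(np.any(np.diff(f_point) < 0), np.all(np.diff(f_monotone) >= 0))
```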

2.
The author considers the estimation of the common probability density of independent and identically distributed random variables observed with added white noise. She assumes that the unknown density belongs to some class of supersmooth functions, and that the error distribution is ordinarily smooth, meaning that its characteristic function decays polynomially asymptotically. In this context, the author evaluates the minimax rate of convergence of the pointwise risk and describes a kernel estimator having this rate. She computes upper bounds for the L2 risk of this estimator.
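
A minimal sketch of a deconvolution kernel density estimator with an ordinarily smooth (Laplace) error, shown only to illustrate the general construction; the bandwidth, error scale, and sinc kernel below are assumed values, and the supersmooth minimax rates studied in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, h = 2000, 0.3, 0.25            # sample size, Laplace error scale, bandwidth (assumed)
x_true = rng.normal(0.0, 1.0, n)     # latent variables with the unknown density
y = x_true + rng.laplace(0.0, b, n)  # observed, contaminated data

def deconv_density(x_grid, y, b, h, m=512):
    """Deconvolution KDE with a sinc kernel: phi_K(t) = 1 on [-1, 1]."""
    t = np.linspace(-1.0 / h, 1.0 / h, m)                # frequencies where phi_K(h t) = 1
    phi_y = np.exp(1j * np.outer(t, y)).mean(axis=1)     # empirical characteristic function of Y
    phi_eps = 1.0 / (1.0 + (b * t) ** 2)                 # Laplace cf (ordinary smooth, polynomial decay)
    integrand = np.exp(-1j * np.outer(x_grid, t)) * (phi_y / phi_eps)
    return np.real(np.trapz(integrand, t, axis=1)) / (2.0 * np.pi)

grid = np.linspace(-4, 4, 81)
print(deconv_density(grid, y, b, h)[len(grid) // 2])     # estimate near the mode of N(0, 1)
```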

3.
4.
An approach to non-linear principal components using radially symmetric kernel basis functions is described. The procedure consists of two steps. The first is a projection of the data set to a reduced dimension using a non-linear transformation whose parameters are determined by the solution of a generalized symmetric eigenvector equation. This is achieved by demanding a maximum-variance transformation subject to a normalization condition (Hotelling's approach) and can be related to the homogeneity analysis approach of Gifi through the minimization of a loss function. The transformed variables are the principal components whose values define contours, or more generally hypersurfaces, in the data space. The second stage of the procedure defines the fitting surface, the principal surface, in the data space (again as a weighted sum of kernel basis functions) using the definition of self-consistency of Hastie and Stuetzle. The parameters of this principal surface are determined by a singular value decomposition, and cross-validation is used to obtain the kernel bandwidths. The approach is assessed on four data sets.
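
A kernel-PCA-style sketch with radially symmetric (Gaussian) basis functions: non-linear components obtained from a symmetric eigenproblem on a centred kernel matrix. This only conveys the flavour of the first stage; the paper's specific generalized eigenproblem, the principal-surface fit, and the cross-validated bandwidths are not reproduced, and the data and bandwidth are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
X = np.column_stack([np.cos(theta), np.sin(theta)]) + 0.05 * rng.normal(size=(200, 2))

def rbf_kernel_matrix(X, bandwidth=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)   # squared pairwise distances
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

K = rbf_kernel_matrix(X)
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                   # double-centre the kernel matrix
eigvals, eigvecs = np.linalg.eigh(Kc)            # symmetric eigenproblem
order = np.argsort(eigvals)[::-1]                # leading components = largest eigenvalues
scores = eigvecs[:, order[:2]] * np.sqrt(np.maximum(eigvals[order[:2]], 0))
print(scores[:3])                                # first two non-linear principal component scores
```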

5.
Bernstein polynomials have many interesting properties. In statistics, they have mainly been used to estimate density functions and regression relationships. The main objective of this paper is to promote further use of Bernstein polynomials in statistics. This includes (1) providing a high-level approximation of the moments of a continuous function g(X) of a random variable X, and (2) proving Jensen's inequality concerning a convex function without requiring second differentiability of the function. The approximation in (1) is shown to be considerably more accurate than the delta method, which approximates the variance of g(X) under the added assumption of differentiability of the function. Two numerical examples illustrate the application of the proposed methodology in (1).
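
For context, a sketch of the classical delta-method benchmark that the paper compares against, checked against Monte Carlo; the Bernstein-polynomial approximation proposed in the paper is not reproduced here, and g, mu, and sigma are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma = 0.5, 0.1
g = np.exp                        # example transformation g(X) = exp(X)
g_prime = np.exp

# Delta-method approximations of the mean and variance of g(X)
mean_delta = g(mu)
var_delta = g_prime(mu) ** 2 * sigma ** 2

# Monte Carlo reference values
x = rng.normal(mu, sigma, 1_000_000)
print("mean:", mean_delta, np.mean(g(x)))
print("var :", var_delta, np.var(g(x)))
```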

6.
Saddlepoint approximations for the densities and the distribution functions of the ratio of two linear functions of gamma random variables and the product of gamma random variables are derived. Ratios of linear functions with positive and negative weights and non-identical gamma variables are considered. The saddlepoint approximations are very accurate in the tails as well as in the center of the distribution. Extensive simulation studies are used to evaluate the accuracy of the proposed methods.
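
A minimal saddlepoint sketch for a single linear combination X = sum_i w_i * G_i of independent Gamma(shape a_i, scale 1) variables with one positive and one negative weight; the shapes and weights are made-up illustration values, and the ratio and product cases treated in the paper are not covered.

```python
import numpy as np
from scipy.optimize import brentq

a = np.array([2.0, 3.0])           # gamma shape parameters (assumed)
w = np.array([1.5, -0.7])          # one positive and one negative weight (assumed)

def K(s):   return np.sum(-a * np.log(1.0 - w * s))           # cumulant generating function
def K1(s):  return np.sum(a * w / (1.0 - w * s))               # K'(s)
def K2(s):  return np.sum(a * w ** 2 / (1.0 - w * s) ** 2)     # K''(s)

# K is finite for s in (1/min(w), 1/max(w)) when weights of both signs occur
lo, hi = 1.0 / w.min() + 1e-8, 1.0 / w.max() - 1e-8

def saddlepoint_density(x):
    s_hat = brentq(lambda s: K1(s) - x, lo, hi)                # solve K'(s) = x
    return np.exp(K(s_hat) - s_hat * x) / np.sqrt(2.0 * np.pi * K2(s_hat))

print(saddlepoint_density(1.0))    # approximate density of X at x = 1
```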

7.
8.
A family of distributions labelled Poisson v Katz is formulated, which includes, as particular or limiting cases, the Negative Binomial, Neyman Type A, Poisson v Pascal, and Poisson v Binomial. Thus, when analyzing data, estimating the parameters of the Poisson v Katz family obviates the need to choose among the particular or limiting cases. In this article, minimum chi-square estimators are presented and their asymptotic relative efficiency is obtained. An example is presented to illustrate the procedure.

9.
The authors propose a novel class of cure rate models for right-censored failure time data. The class is formulated through a transformation on the unknown population survival function. It includes the mixture cure model and the promotion time cure model as two special cases. The authors propose a general form of the covariate structure which automatically satisfies an inherent parameter constraint and includes the corresponding binomial and exponential covariate structures in the two main formulations of cure models. The proposed class provides a natural link between the mixture and the promotion time cure models, and it offers a wide variety of new modelling structures as well. Within the Bayesian paradigm, a Markov chain Monte Carlo computational scheme is implemented for sampling from the full conditional distributions of the parameters. Model selection is based on the conditional predictive ordinate criterion. The use of the new class of models is illustrated with a set of real data involving a melanoma clinical trial.
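
A sketch of the two special cases the proposed class links, written as population survival functions with an exponential latency distribution; the parameter values are illustrative only, and the transformation class and its Bayesian fitting are not shown.

```python
import numpy as np

def latency_survival(t, rate=1.0):
    return np.exp(-rate * t)                      # S(t) for the non-cured (exponential latency)

def mixture_cure(t, pi=0.3, rate=1.0):
    # Mixture cure model: S_pop(t) = pi + (1 - pi) * S(t); cure fraction pi
    return pi + (1.0 - pi) * latency_survival(t, rate)

def promotion_time_cure(t, theta=1.2, rate=1.0):
    # Promotion time cure model: S_pop(t) = exp(-theta * F(t)); cure fraction exp(-theta)
    return np.exp(-theta * (1.0 - latency_survival(t, rate)))

t = np.linspace(0, 5, 6)
print(mixture_cure(t))
print(promotion_time_cure(t))
```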

10.
Summary  Let g(x) and f(x) be continuous density functions on (a, b) and let {φj} be a complete orthonormal sequence of functions in L2(g), the set of square-integrable functions weighted by g on (a, b). Suppose that f(x) = g(x)∑j θj φj(x) over (a, b). Given a grouped sample of size n from f(x), the paper investigates the asymptotic properties of the restricted maximum likelihood estimator of the density, obtained by setting all but the first m of the θj's equal to 0. Practical suggestions are given for performing estimation via Fourier and Legendre polynomial series.
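
A simple orthogonal series density estimate on (-1, 1) with a normalized Legendre basis, shown only to illustrate the setup; the paper studies a restricted maximum likelihood estimator from grouped data, which this projection-type sketch does not implement, and the sample and number of terms m are assumed.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)
x = rng.beta(2.0, 2.0, 1000) * 2.0 - 1.0       # sample on (-1, 1)
m = 6                                          # number of basis functions retained

def phi(j, t):
    """Normalized Legendre polynomial, orthonormal on (-1, 1)."""
    coef = np.zeros(j + 1)
    coef[j] = 1.0
    return np.sqrt((2 * j + 1) / 2.0) * legendre.legval(t, coef)

theta_hat = np.array([phi(j, x).mean() for j in range(m)])   # estimated Fourier coefficients

def f_hat(t):
    return sum(theta_hat[j] * phi(j, t) for j in range(m))

print(f_hat(np.linspace(-0.9, 0.9, 5)))
```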

11.
A FORTRAN-77 subroutine for a general version of multi-response permutation procedures (MRPP) is described. The exact first four moments are employed in conjunction with the Pearson type I, type III, and type VI distributions to calculate the associated P-values.
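
A sketch of the MRPP test statistic (a weighted average of within-group mean pairwise distances) with a Monte Carlo permutation P-value; the subroutine described in the paper instead matches the exact first four moments to Pearson curves, and the two-group data and group-size weighting below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 15), rng.normal(0.8, 1, 15)])   # two groups
labels = np.repeat([0, 1], 15)
dist = np.abs(x[:, None] - x[None, :])                               # pairwise distance matrix

def mrpp_delta(labels, dist):
    delta, n_total = 0.0, len(labels)
    for g in np.unique(labels):
        idx = np.where(labels == g)[0]
        n_g = len(idx)
        within = dist[np.ix_(idx, idx)][np.triu_indices(n_g, k=1)]   # within-group pair distances
        delta += (n_g / n_total) * within.mean()                     # C_g = n_g / N weighting
    return delta

observed = mrpp_delta(labels, dist)
perm = np.array([mrpp_delta(rng.permutation(labels), dist) for _ in range(2000)])
print("delta =", observed, "P-value ~", np.mean(perm <= observed))   # small delta => groups separate
```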

12.
The author introduces robust techniques for estimation, inference and variable selection in the analysis of longitudinal data. She first addresses the problem of the robust estimation of the regression and nuisance parameters, for which she derives the asymptotic distribution. She uses weighted estimating equations to build robust quasi-likelihood functions. These functions are then used to construct a class of test statistics for variable selection. She derives the limiting distribution of these tests and shows their robustness properties in terms of stability of the asymptotic level and power under contamination. An application to a real data set allows her to illustrate the benefits of a robust analysis.

13.
Joshi (1973) and Balakrishnan and Malik (1985) have derived some identities for the moments of order statistics from independent and identically distributed random variables. In this paper, we make use of a basic result due to David and Joshi (1968) and show that these identities for the moments also hold when the order statistics arise from exchangeable variables.
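
A quick numerical check of an elementary order-statistic moment identity, sum over i of E[X(i:n)] = n E[X1], included only to illustrate the kind of identity involved; the specific identities of Joshi (1973) and Balakrishnan and Malik (1985), and their extension to exchangeable variables, are not coded here.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 5, 200_000
samples = rng.exponential(scale=2.0, size=(reps, n))
order_stats = np.sort(samples, axis=1)

lhs = order_stats.mean(axis=0).sum()   # sum of E[X_(i:n)] estimated by Monte Carlo
rhs = n * samples.mean()               # n * E[X_1]
print(lhs, rhs)                        # the two should agree up to simulation error
```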

14.
This paper studies bandwidth selection for kernel estimation of derivatives of multidimensional conditional densities, a non-parametric realm unexplored in the literature. This paper extends Baird [Cross validation bandwidth selection for derivatives of multidimensional densities. RAND Working Paper series, WR-1060; 2014] in its examination of conditional multivariate densities, derives and presents criteria for arbitrary kernel order and density dimension, shows consistency of the estimators, and investigates a minimization criterion which jointly estimates numerator and denominator bandwidths. I conduct a Monte Carlo simulation study for various orders of kernels in the Gaussian family and compare the new cross-validation criterion with those implied by Baird [Cross validation bandwidth selection for derivatives of multidimensional densities. RAND Working Paper series, WR-1060; 2014]. The paper finds that higher-order kernels become increasingly important as the dimension of the distribution increases. I find that the cross-validation criterion developed in this paper, which jointly estimates the derivative of the joint density (numerator) and the marginal density (denominator), does orders of magnitude better than criteria that estimate the bandwidths separately. I further find that using the infinite-order Dirichlet kernel tends to give the best results.

15.
In this paper, we obtain a generalized moment identity for the case when the distributions of the random variables are not necessarily purely discrete or absolutely continuous. The proposed identity is useful for finding the generator used in the approximation of distributions by Stein's method, and a new approach to this approximation is discussed. We bring the characterization based on the relationship between conditional expectations and the hazard measure into our unified framework. As an application, a new lower bound on the mean-squared error is obtained and compared with the Bayesian Cramér-Rao bound.
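
A Monte Carlo check of the classical Stein identity for the standard normal, E[f'(Z) - Z f(Z)] = 0, which is the type of generator relationship the abstract refers to; the generalized moment identity derived in the paper, covering mixed discrete and continuous distributions, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)
z = rng.normal(size=1_000_000)

f = np.sin             # any smooth bounded test function
f_prime = np.cos

print(np.mean(f_prime(z) - z * f(z)))   # should be close to 0
```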

16.
We consider situations where subjects in a longitudinal study experience recurrent events. However, the events are observed only in the form of counts for intervals which can vary across subjects. Methods for estimating the mean and rate functions of the recurrent-event processes are presented, based on loglinear regression models which incorporate piecewise-constant baseline rate functions. Robust methods and methods based on mixed Poisson processes are compared in a simulation study and in an example involving superficial bladder tumours in humans. Both approaches provide a simple and effective way to deal with interval-grouped data.
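
A minimal sketch of a Poisson loglinear model for interval counts with a piecewise-constant baseline rate, under the simplifying assumption that each observation interval lies within a single baseline piece, so the piece enters as an indicator and the interval length as a log offset; the simulated data are illustrative, and the robust and mixed Poisson comparisons in the paper are not shown.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 400
piece = rng.integers(0, 3, n)                  # which baseline piece the interval falls in
length = rng.uniform(0.5, 2.0, n)              # interval length (exposure)
x = rng.binomial(1, 0.5, n)                    # a binary covariate (e.g. treatment)
rho = np.array([0.5, 1.0, 1.5])                # true piecewise-constant baseline rates
mu = rho[piece] * np.exp(0.7 * x) * length     # expected counts
y = rng.poisson(mu)

design = np.column_stack([np.eye(3)[piece], x])            # piece indicators + covariate, no intercept
fit = sm.GLM(y, design, family=sm.families.Poisson(),
             offset=np.log(length)).fit()
print(fit.params)    # first three ~ log(rho_j), last ~ 0.7
```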

17.
A Box-Cox transformed linear model usually has the form y^(λ) = μ + β1x1 + … + βpxp + σe, where y^(λ) is the power transform of y. Although widely used in practice, the Fisher information matrix for the unknown parameters and, in particular, its inverse have not been studied seriously in the literature. We obtain these two important matrices to put the Box-Cox transformed linear model on firmer ground. The question of how to make inference on β = (β1,…,βp)T when λ is estimated from the data is then discussed for large but finite sample sizes by studying some parameter-based asymptotics. Both unconditional and conditional inference are studied from the frequentist point of view.
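
A sketch of profile-likelihood estimation of the Box-Cox parameter λ in a transformed linear model on simulated data; the simulated response, grid, and covariate are assumptions, and the Fisher information calculations and conditional versus unconditional inference studied in the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 300
x = rng.uniform(0, 2, n)
y = np.exp(0.5 + 0.8 * x + 0.2 * rng.normal(size=n))    # positive response
X = np.column_stack([np.ones(n), x])

def boxcox(y, lam):
    return np.log(y) if lam == 0 else (y ** lam - 1.0) / lam

def profile_loglik(lam):
    z = boxcox(y, lam)
    beta = np.linalg.lstsq(X, z, rcond=None)[0]          # OLS fit on the transformed scale
    rss = np.sum((z - X @ beta) ** 2)
    # -n/2 * log(sigma_hat^2) plus the Jacobian of the transformation
    return -0.5 * n * np.log(rss / n) + (lam - 1.0) * np.sum(np.log(y))

grid = np.linspace(-1.0, 1.5, 101)
lam_hat = grid[np.argmax([profile_loglik(l) for l in grid])]
print("profile-likelihood estimate of lambda:", lam_hat)  # near 0 (log transform) for these data
```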

18.
The Lindley distribution is an important distribution for analysing stress-strength reliability models and lifetime data. In many ways, it is a better model than one based on the exponential distribution. Order statistics arise naturally in many such applications. In this paper, we derive exact explicit expressions for the single, double (product), triple and quadruple moments of order statistics from the Lindley distribution. We then use these moments to obtain the best linear unbiased estimates (BLUEs) of the location and scale parameters based on Type-II right-censored samples. Next, we use these results to determine the mean, variance, and coefficients of skewness and kurtosis of certain linear functions of order statistics, and to develop Edgeworth approximate confidence intervals for the Lindley location and scale parameters. In addition, we carry out numerical illustrations through Monte Carlo simulations to show the usefulness of the findings. Finally, we apply the results to a real data set.
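
A simulation sketch for single moments of order statistics from a Lindley(θ) distribution, using its standard mixture representation f(x) = θ²/(1+θ)(1+x)e^(-θx), i.e. Exp(θ) with probability θ/(1+θ) and Gamma(2, θ) otherwise; θ, n and the number of replications are assumed values, and the exact expressions and BLUEs derived in the paper are not coded, this only produces Monte Carlo values one could compare them against.

```python
import numpy as np

rng = np.random.default_rng(9)
theta, n, reps = 0.5, 5, 100_000

def rlindley(size):
    """Draw Lindley(theta) variates via the exponential/gamma mixture."""
    use_exp = rng.random(size) < theta / (1.0 + theta)
    return np.where(use_exp,
                    rng.gamma(1.0, 1.0 / theta, size),   # Exp(theta) component
                    rng.gamma(2.0, 1.0 / theta, size))   # Gamma(2, theta) component

samples = np.sort(rlindley((reps, n)), axis=1)           # rows of ordered Lindley samples
single_moments = samples.mean(axis=0)                    # E[X_(i:n)], i = 1..n, by simulation
print(single_moments)
```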

19.
The problem of minimum variance unbiased estimation of the probability density function of a random variable belonging to an exponential family is considered. The method of estimation proposed in this paper requires the solution of a certain integral equation. For many probability distributions the solution of this equation is given by a known result in integral transform theory.

20.
The authors consider the problem of simultaneous transformation and variable selection for linear regression. They propose a fully Bayesian solution to the problem, which allows averaging over all models considered, including transformations of the response and predictors. The authors use the Box-Cox family of transformations to transform the response and each predictor. To deal with the change of scale induced by the transformations, the authors propose to focus on new quantities rather than the estimated regression coefficients. These quantities, referred to as generalized regression coefficients, have a similar interpretation to the usual regression coefficients on the original scale of the data, but do not depend on the transformations. This allows probabilistic statements about the size of the effect associated with each variable, on the original scale of the data. In addition to variable and transformation selection, there is also uncertainty involved in the identification of outliers in regression. Thus, the authors also propose a more robust model to account for such outliers, based on a t-distribution with unknown degrees of freedom. Parameter estimation is carried out using an efficient Markov chain Monte Carlo algorithm, which permits moves around the space of all possible models. Using three real data sets and a simulation study, the authors show that there is considerable uncertainty about variable selection, choice of transformation, and outlier identification, and that there is an advantage in dealing with all three simultaneously. The Canadian Journal of Statistics 37: 361-380; 2009 © 2009 Statistical Society of Canada
