Similar Articles
20 similar articles found (search time: 15 ms)
1.
A new procedure is proposed to estimate the jump location curve and surface in the two-dimensional (2D) and three-dimensional (3D) nonparametric jump regression models, respectively. In each of the 2D and 3D cases, our estimation procedure is motivated by the fact that, under some regularity conditions, the ridge location of the rotational difference kernel estimate (RDKE; Qiu in Sankhyā Ser. A 59, 268–294, 1997, and J. Comput. Graph. Stat. 11, 799–822, 2002; Garlipp and Müller in Sankhyā Ser. A 69, 55–86, 2007) obtained from the noisy image is asymptotically close to the jump location of the true image. Accordingly, a computational procedure based on the kernel smoothing method is designed to find the ridge location of the RDKE, and the result is taken as the jump location estimate. The sequence relationship among the points comprising our jump location estimate is obtained. Our jump location estimate is produced without prior knowledge of the range or shape of the jump region. Simulation results demonstrate that the proposed estimation procedure detects the jump location very well, making it a useful alternative for estimating the jump location in both the 2D and 3D cases.

2.
A. R. Soltani  H. Homei 《Statistics》2013,47(6):611-620
A new rich class of generalized two-sided power (TSP) distributions, whose density functions are expressed in terms of Gauss hypergeometric functions, is introduced and studied. In this class, the symmetric distributions are supported on finite intervals and have normal-shaped densities. Our study of TSP distributions also leads us to a new class of discrete distributions on {0, 1, …, k}. In addition, a new numerical method for parameter estimation using moments is given.
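The generalized class above reduces, in its simplest case, to the standard two-sided power distribution. The sketch below implements that baseline TSP on [0, 1] with mode θ and shape n (an assumption standing in for the paper's Gauss-hypergeometric generalization, whose exact density is not given in the abstract):

```python
def tsp_pdf(x, theta, n):
    """Standard two-sided power (TSP) density on [0, 1] with mode theta
    in (0, 1) and shape n > 0: two power branches glued at the mode."""
    if not 0.0 <= x <= 1.0:
        return 0.0
    if x <= theta:
        return n * (x / theta) ** (n - 1)
    return n * ((1.0 - x) / (1.0 - theta)) ** (n - 1)

def tsp_quantile(u, theta, n):
    # Inverse CDF; F(theta) = theta splits the two branches.
    if u <= theta:
        return theta * (u / theta) ** (1.0 / n)
    return 1.0 - (1.0 - theta) * ((1.0 - u) / (1.0 - theta)) ** (1.0 / n)
```

With n = 1 the density is uniform; n > 1 gives a unimodal, triangular-like shape, which is the sense in which symmetric members have "normal shape" densities.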

3.
Kumar and Patel (1971) considered the problem of testing the equality of location parameters of two exponential distributions on the basis of samples censored from above, when the scale parameters are the same and unknown. The test proposed by them is shown to be biased for n1 ≠ n2, while for n1 = n2 the test possesses the property of monotonicity and is equivalent to the likelihood ratio test, which is considered by Epstein and Tsao (1953) and Dubey (1963a, 1963b). Epstein and Tsao state that the test is unbiased. We may note that when the scale parameters of k exponential distributions are unknown, the problem of testing the equality of location parameters is reducible to that of testing the equality of parameters in k rectangular populations, for which a test and its power function were given by Khatri (1960, 1965); Jaiswal (1969) considered similar problems in his thesis. Here we extend the problem of testing the equality of k exponential distributions on the basis of samples censored from above when the scale parameters are equal and unknown, and we establish the likelihood ratio test (LRT) and the union-intersection test (UIT) procedures. Using the results previously derived by Jaiswal (1969), we obtain the power function for the LRT and, for k = 2, show that the test possesses the property of monotonicity. The power function of the UIT is also given.

4.
In this work, we propose a beta prime kernel estimator for estimating probability density functions with nonnegative support, using the beta prime probability density function as the kernel. The estimator is free of boundary bias and nonnegative, with a naturally varying shape. We obtain the optimal rate of convergence for the mean squared error (MSE) and the mean integrated squared error (MISE). We also use an adaptive Bayesian bandwidth selection method with Lindley approximation for heavy-tailed distributions and compare its performance with the global least squares cross-validation bandwidth selection method. Monte Carlo simulation studies are performed to evaluate the average integrated squared error (ISE) of the proposed kernel estimator against some asymmetric competitors. Moreover, real data sets are presented to illustrate the findings.
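A minimal sketch of an asymmetric-kernel density estimator of this kind. The beta prime density below is standard; the way its shape parameters vary with the evaluation point x and bandwidth h (a = x/h + 1, b = 1/h + 1, giving kernel mean x + h) is an illustrative choice in the style of Chen's gamma kernels, not necessarily the paper's exact parameterization:

```python
import math

def betaprime_pdf(t, a, b):
    # Beta prime density t^(a-1) * (1+t)^(-a-b) / B(a, b) on t > 0.
    if t <= 0.0:
        return 0.0
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1.0) * math.log(t) - (a + b) * math.log1p(t) - log_beta)

def betaprime_kde(x, sample, h):
    """Density estimate at x >= 0: average the beta prime kernel, whose
    shape varies with x, over the data. Support [0, inf) means no mass
    leaks below the boundary, which is the source of the boundary-bias-free
    property claimed for such estimators."""
    a = x / h + 1.0   # assumed parameterization (kernel mean = x + h)
    b = 1.0 / h + 1.0
    return sum(betaprime_pdf(t, a, b) for t in sample) / len(sample)
```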

5.
A common practice for obtaining an efficient semiparametric estimate is to iteratively maximize the (penalized) full log-likelihood w.r.t. its Euclidean parameter and functional nuisance parameter. A rigorous theoretical study of this semiparametric iterative estimation approach is the main purpose of this study. We first show that the grid search algorithm produces an initial estimate with the proper convergence rate. Our second contribution is a formula for calculating the minimal number of iterations k* needed to produce an efficient estimate. We discover that (i) k* depends on the convergence rates of the initial estimate and the nuisance functional estimate, and (ii) k* iterations are also sufficient for recovering the estimation sparsity in high-dimensional data. The last contribution is the novel construction of the estimator, which does not require knowing the explicit expression of the efficient score function. The above general conclusions apply to semiparametric models estimated under various regularizations, for example, kernel or penalized estimation. As far as we are aware, this study provides a first general theoretical justification for the 'one-/two-step iteration' phenomena observed in the semiparametric literature.

6.
We propose a nonparametric method, called rank-based empirical likelihood (REL), for making inferences on medians and cumulative distribution functions (CDFs) of k populations. The standard distribution-free approach to testing the equality of k medians requires that the k population distributions have the same shape. Our REL-ratio (RELR) test for this problem requires fewer assumptions and can effectively use the symmetry information when the distributions are symmetric. Furthermore, our RELR statistic does not require estimation of variance, and achieves asymptotic pivotalness implicitly. When the k populations have equal medians we show that the REL method produces valid inferences for the common median and CDFs of k populations. Simulation results show that the REL approach works remarkably well in finite samples. A real data example is used to illustrate the proposed REL method.

7.
Statistical control charts are often used in industry to monitor processes in the interests of quality improvement. Such charts assume independence and normality of the control statistic, but these assumptions are often violated in practice. To better capture the true shape of the underlying distribution of the control statistic, we utilize the g-and-k distributions to estimate probability limits, the true ARL, and the error in confidence that arises from incorrectly assuming normality. A sensitivity assessment reveals that the extent of error in confidence associated with control chart decision-making procedures increases more rapidly as the distribution becomes more skewed or as the tails of the distribution become longer than those of the normal distribution. These methods are illustrated using both a frequentist and computational Bayesian approach to estimate the g-and-k parameters in two different practical applications. The Bayesian approach is appealing because it can account for prior knowledge in the estimation procedure and yields posterior distributions of parameters of interest such as control limits.
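The g-and-k family is defined through its quantile function, so once parameters are fitted, probability limits follow directly. A sketch using the standard g-and-k quantile form (the fitting step itself, frequentist or Bayesian, is not reproduced here):

```python
import math
from statistics import NormalDist

def gk_quantile(u, A, B, g, k, c=0.8):
    """Quantile function of the g-and-k distribution:
    Q(u) = A + B*(1 + c*tanh(g*z/2)) * z * (1 + z^2)**k,  z = Phi^{-1}(u),
    where A is location, B scale, g controls skewness, k controls tail
    weight, and c = 0.8 is the conventional fixed constant."""
    z = NormalDist().inv_cdf(u)
    return A + B * (1.0 + c * math.tanh(g * z / 2.0)) * z * (1.0 + z * z) ** k

def probability_limits(A, B, g, k):
    # 0.135% and 99.865% quantiles: the g-and-k analogue of the usual
    # normal-theory 3-sigma control limits.
    return gk_quantile(0.00135, A, B, g, k), gk_quantile(0.99865, A, B, g, k)
```

With g = k = 0 the limits reduce to A ± 3B, recovering the normal-theory chart; nonzero g or k shifts the limits to match skewness or heavy tails.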

8.
Among k independent two-parameter exponential distributions which have the common scale parameter, the lower extreme population (LEP) is the one with the smallest location parameter and the upper extreme population (UEP) is the one with the largest location parameter. Given a multiply type II censored sample from each of these k independent two-parameter exponential distributions, 14 estimators for the unknown location parameters and the common unknown scale parameter are considered. Fourteen simultaneous confidence intervals (SCIs) for all distances from the extreme populations (UEP and LEP) and from the UEP from these k independent exponential distributions under the multiply type II censoring are proposed. The critical values are obtained by the Monte Carlo method. The optimal SCIs among 14 methods are identified based on the criteria of minimum confidence length for various censoring schemes. The subset selection procedures of extreme populations are also proposed and two numerical examples are given for illustration.

9.
Based on the Gamma kernel density estimation procedure, this article constructs a nonparametric kernel estimate of the regression function when the covariates are nonnegative. Asymptotic normality and uniform almost sure convergence results for the new estimator are systematically studied, and the finite-sample performance of the proposed estimate is assessed via a simulation study and a comparison with an existing method. Finally, the proposed estimation procedure is applied to the Geyser data set.
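A minimal sketch of a gamma-kernel regression estimator of this kind: a Nadaraya–Watson ratio whose weights come from a gamma density with shape varying in the evaluation point x (the shape choice a = x/b + 1, giving kernel mean x + b, follows Chen's gamma kernel and is an assumption, not necessarily the article's exact construction):

```python
import math

def gamma_kernel(t, x, b):
    # Gamma(shape x/b + 1, scale b) density evaluated at data point t > 0;
    # its mean x + b shrinks to x as b -> 0, so no weight falls below 0.
    if t <= 0.0:
        return 0.0
    a = x / b + 1.0
    return math.exp((a - 1.0) * math.log(t) - t / b
                    - a * math.log(b) - math.lgamma(a))

def gamma_nw_regression(x, xs, ys, b):
    """Nadaraya-Watson estimate m(x) = sum_i y_i K(x_i) / sum_i K(x_i)
    with gamma-kernel weights, for nonnegative covariates xs."""
    w = [gamma_kernel(t, x, b) for t in xs]
    s = sum(w)
    return sum(wi * yi for wi, yi in zip(w, ys)) / s if s > 0 else float("nan")
```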

10.
We show that sup_x |f_n(x) − f(x)| → 0 completely as n → ∞, where f is a uniformly continuous density on R^d, X_1, X_2, … are independent random vectors with common density f, and f_n is the variable kernel estimate f_n(x) = (1/n) Σ_{i=1}^{n} H_ni^{−d} K((x − X_i)/H_ni). Here H_ni is the distance between X_i and its kth nearest neighbour among the remaining observations, K is a given density satisfying some regularity conditions, and k = k_n is a sequence of integers with the property that k_n/log n → ∞ and k_n/n → 0 as n → ∞.
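A sketch of the variable kernel estimate described above, f_n(x) = (1/n) Σ_i K((x − X_i)/H_ni)/H_ni, assuming dimension d = 1, a Gaussian kernel K, and H_ni taken as the distance from X_i to its kth nearest neighbour:

```python
import math

def variable_kernel_estimate(x, data, k):
    """Variable (k-nearest-neighbour bandwidth) kernel density estimate
    at x: each observation gets its own bandwidth H_ni, the distance to
    its k-th nearest neighbour among the other observations."""
    n = len(data)
    total = 0.0
    for i, xi in enumerate(data):
        dists = sorted(abs(xi - xj) for j, xj in enumerate(data) if j != i)
        h = dists[k - 1]                      # H_ni: k-th nearest-neighbour distance
        u = (x - xi) / h
        total += math.exp(-0.5 * u * u) / (math.sqrt(2.0 * math.pi) * h)
    return total / n
```

Because each Gaussian term integrates to 1 in x, the estimate is itself a proper density; the bandwidths adapt automatically, widening in sparse regions of the data.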

11.
Extended Weibull type distribution and finite mixture of distributions   (Cited by 1: 0 self-citations, 1 by others)
An extended form of the Weibull distribution is suggested which has two shape parameters (m and δ). Introducing the second shape parameter δ not only allows the extended Weibull distribution to be expressed as an exact form of a mixture of distributions under certain conditions, but also provides extra flexibility for the density function over the positive range. The shape of the density function of the extended Weibull type distribution is shown for various values of the parameters, which may be of some interest to Bayesians. Certain statistical properties, such as the hazard rate function, mean residual function, and rth moment, are given explicitly. The proposed extended Weibull distribution is used to derive an exact form of two-, three- and k-component mixtures of distributions. With the help of a real data set, the usefulness of the mixture Weibull type distribution is illustrated using a Markov chain Monte Carlo (MCMC) Gibbs sampling approach.

12.
Liu and Singh (1993, 2006) introduced a depth-based d-variate extension of the nonparametric two-sample scale test of Siegel and Tukey (1960). Liu and Singh (2006) generalized this depth-based test for scale homogeneity of k ≥ 2 multivariate populations. Motivated by the work of Gastwirth (1965), we propose k-sample percentile modifications of Liu and Singh's proposals. The test statistic is shown to be asymptotically normal when k = 2, and compares favorably with Liu and Singh (2006) if the underlying distributions are either symmetric with light tails or asymmetric. In the case of the skewed distributions considered in this paper, the power of the proposed tests can attain twice the power of the Liu–Singh test for d ≥ 1. Finally, in the k-sample case, it is shown that the asymptotic distribution of the proposed percentile-modified Kruskal–Wallis type test is χ2 with k − 1 degrees of freedom. Power properties of this k-sample test are similar to those of the proposed two-sample one. The Canadian Journal of Statistics 39: 356–369; 2011 © 2011 Statistical Society of Canada

13.
This article is concerned with testing multiple hypotheses, one for each of a large number of small data sets. Such data are sometimes referred to as high-dimensional, low-sample-size data. Our model assumes that each observation within a randomly selected small data set follows a mixture of C shifted and rescaled versions of an arbitrary density f. A novel kernel density estimation scheme, in conjunction with clustering methods, is applied to estimate f. The Bayes information criterion and a new criterion, the weighted mean of within-cluster variances, are used to estimate C, the number of mixture components or clusters. These results are applied to the multiple testing problem. The null sampling distribution of each test statistic is determined by f, and hence a bootstrap procedure that resamples from an estimate of f is used to approximate this null distribution.
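The "resample from an estimate of f" step is, in its simplest form, a smoothed bootstrap: draw an observation at random and perturb it with kernel noise. The sketch below uses a plain Gaussian-kernel estimate of f; the article's estimate of f comes from a clustered mixture fit, which is not reproduced here:

```python
import random

def smoothed_bootstrap(data, h, n_draws, seed=0):
    """Draw n_draws values from a Gaussian-kernel density estimate of f:
    pick a data point uniformly at random, then add N(0, h^2) noise.
    Equivalent to sampling from the KDE with bandwidth h."""
    rng = random.Random(seed)
    return [rng.choice(data) + rng.gauss(0.0, h) for _ in range(n_draws)]
```

Recomputing the test statistic on many such resamples approximates its null sampling distribution.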

14.
In this paper, we consider the family of skew generalized t (SGT) distributions originally introduced by Theodossiou [P. Theodossiou, Financial data and the skewed generalized t distribution, Manage. Sci. 44(12, Part 1) (1998), pp. 1650–1661] as a skew extension of the generalized t (GT) distribution. The SGT distribution family warrants special attention, because it encompasses distributions having both heavy tails and skewness, and many of the widely used distributions such as Student's t, normal, Hansen's skew t, exponential power, and skew exponential power (SEP) distributions are included as limiting or special cases in the SGT family. We show that the SGT distribution can be obtained as the scale mixture of the SEP and generalized gamma distributions. We investigate several properties of the SGT distribution and consider the maximum likelihood estimation of the location, scale, and skewness parameters under the assumption that the shape parameters are known. We show that if the shape parameters are estimated along with the location, scale, and skewness parameters, the influence function for the maximum likelihood estimators becomes unbounded. We obtain the necessary conditions to ensure the uniqueness of the maximum likelihood estimators for the location, scale, and skewness parameters, with known shape parameters. We provide a simple iterative re-weighting algorithm to compute the maximum likelihood estimates for the location, scale, and skewness parameters and show that this simple algorithm can be identified as an EM-type algorithm. We finally present two applications of the SGT distributions in robust estimation.

15.
As a flexible alternative to the Cox model, the accelerated failure time (AFT) model assumes that the event time of interest depends on the covariates through a regression function. The AFT model with non-parametric covariate effects is investigated, when variable selection is desired along with estimation. Formulated in the framework of the smoothing spline analysis of variance model, the proposed method based on the Stute estimate (Stute, 1993 [Consistent estimation under random censorship when covariables are present, J. Multivariate Anal. 45, 89–103]) can achieve a sparse representation of the functional decomposition, by utilizing a reproducing kernel Hilbert norm penalty. Computational algorithms and theoretical properties of the proposed method are investigated. The finite sample size performance of the proposed approach is assessed via simulation studies. The primary biliary cirrhosis data is analyzed for demonstration.

16.
In order to explore and compare a finite number T of data sets by applying functional principal component analysis (FPCA) to the T associated probability density functions, we estimate these density functions by using the multivariate kernel method. The data set sizes being fixed, we study the behaviour of this FPCA under the assumption that all the bandwidth matrices used in the estimation of densities are proportional to a common parameter h and proportional to either the variance matrices or the identity matrix. In this context, we propose a selection criterion of the parameter h which depends only on the data and the FPCA method. Then, on simulated examples, we compare the quality of approximation of the FPCA when the bandwidth matrices are selected using either the previous criterion or two other classical bandwidth selection methods, that is, a plug-in or a cross-validation method.

17.
In this article, we focus on the general k-step step-stress accelerated life tests with Type-I censoring for two-parameter Weibull distributions based on the tampered failure rate (TFR) model. We get the optimum design for the tests under the criterion of the minimization of the asymptotic variance of the maximum likelihood estimate of the pth percentile of the lifetime under the normal operating conditions. Optimum test plans for the simple step-stress accelerated life tests under Type-I censoring are developed for the Weibull distribution and the exponential distribution in particular. Finally, an example is provided to illustrate the proposed design and a sensitivity analysis is conducted to investigate the robustness of the design.

18.
19.
We consider a test for the equality of k population medians θi, i = 1, 2, …, k, when it is believed a priori that θ1 ≤ θ2 ≤ … ≤ θk. The observations are subject to right censorship. The distributions of the censoring variables for each population are assumed to be equal. This test is compared with the general k-sample test proposed by Breslow.

20.
The two-parameter Gamma distribution is widely used for modeling lifetime distributions in reliability theory. There is much literature on inference for the individual parameters of the Gamma distribution, namely the shape parameter k and the scale parameter θ, when the other parameter is known. However, reliability professionals are usually most interested in statistical inference about the mean lifetime μ, which for the Gamma distribution equals the product θk. The problem of inference on the mean μ when both parameters θ and k are unknown has received less attention in the literature. In this paper we review the existing methods for interval estimation of μ. A comparative study indicates that the existing methods are either too approximate, yielding less reliable confidence intervals, or computationally quite complicated, requiring advanced computing facilities. We propose a new simple method for interval estimation of the Gamma mean and compare its performance with the existing methods. The comparative study showed that the newly proposed, computationally simple optimum power normal approximation method works best even for small sample sizes.
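For context, one generic baseline against which such methods are compared is a percentile-bootstrap interval for μ = θk. The sketch below is that simple baseline only, NOT the optimum power normal approximation proposed in the article:

```python
import random
import statistics

def gamma_mean_ci(sample, level=0.95, n_boot=2000, seed=0):
    """Percentile-bootstrap confidence interval for the mean of positive
    lifetime data: resample with replacement, collect resample means,
    and read off the central quantiles."""
    rng = random.Random(seed)
    n = len(sample)
    means = sorted(statistics.fmean(rng.choices(sample, k=n))
                   for _ in range(n_boot))
    alpha = (1.0 - level) / 2.0
    lo = means[int(alpha * n_boot)]
    hi = means[min(n_boot - 1, int((1.0 - alpha) * n_boot))]
    return lo, hi
```

Such bootstrap intervals are easy to compute but can undercover for small, skewed samples, which is the regime where the article's method is reported to do better.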


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号