Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
The family of lp-norm symmetric distributions was proposed by Yue and Ma and is a natural generalization of the family of l1-norm symmetric distributions studied by Fang et al. In this article, we propose a stochastic representation for the lp-norm symmetric distribution for any constant p > 0. The stochastic representation is expressed through independent and identically distributed uniform U(0, 1) random variables. It is illustrated that the stochastic representation can be applied to statistical simulation and uniform experimental design.
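As a concrete illustration of such stochastic representations, the l1-norm special case can be simulated from iid U(0, 1) variables using the classical fact that the spacings of sorted uniforms are uniformly distributed on the unit l1-simplex. The sketch below is illustrative only (function names are not from the paper); the general lp representation of Yue and Ma is not reproduced here.

```python
import numpy as np

def l1_simplex_sample(n, size, rng):
    """Draw `size` points uniformly from the unit l1-simplex
    {x in R^n : x_i >= 0, sum_i x_i = 1} via spacings of sorted U(0,1)s."""
    u = np.sort(rng.uniform(size=(size, n - 1)), axis=1)
    padded = np.hstack([np.zeros((size, 1)), u, np.ones((size, 1))])
    return np.diff(padded, axis=1)  # consecutive gaps sum to 1 by construction

rng = np.random.default_rng(0)
x = l1_simplex_sample(5, 1000, rng)
```

Each row of `x` is a nonnegative vector with unit l1-norm, which is exactly the building block used for l1-norm symmetric simulation.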

2.
In this paper it is shown that the exponent p in Lp-norm estimation, expressed as an explicit function of the sample kurtosis, is asymptotically normally distributed. The asymptotic variances of p for two such formulae are derived. An alternative formula which implicitly relates p to the sample kurtosis is also discussed.

An adaptive procedure for the selection of p when the underlying error distribution is unknown is also suggested. This procedure is used to verify empirically that the asymptotic distribution of p is normal.
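For a fixed exponent p, Lp-norm estimation of a location parameter reduces to a one-dimensional convex minimization. The hedged sketch below is illustrative only — the paper's explicit kurtosis-based formulae for choosing p are not reproduced — and simply shows the Lp fit for a given p, recovering the mean at p = 2 and approximately the median at p = 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lp_location(x, p):
    """Lp-norm estimate of location: minimize sum_i |x_i - m|^p over m."""
    obj = lambda m: np.sum(np.abs(x - m) ** p)
    return minimize_scalar(obj, bounds=(x.min(), x.max()), method="bounded").x

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, size=501)
m2 = lp_location(x, 2.0)  # p = 2: the sample mean
m1 = lp_location(x, 1.0)  # p = 1: approximately the sample median
```

An adaptive procedure of the kind the paper suggests would first estimate p from the sample kurtosis and then call `lp_location` with that estimate.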

3.
In this article, we consider the ARD(1) process, where D = D[0, 1] is the space of càdlàg functions and the pth derivative has a possible jump. The aim is to detect the position and intensity of the jump from p derivatives, with either continuous or discrete data; a jump in the (p + 1)th derivative is also considered. The main result allows the jump location and its intensity to be detected simultaneously. Asymptotic results are derived.

4.
In multiple linear regression analysis each lower-dimensional subspace L of a known linear subspace M of ℝ^n corresponds to a nonempty subset of the columns of the regressor matrix. For a fixed subspace L, the Cp statistic is an unbiased estimator of the mean square error if the projection of the response vector onto L is used to estimate the expected response. In this article, we consider two truncated versions of the Cp statistic that can also be used to estimate this mean square error. The Cp statistic and its truncated versions are compared on two example data sets, illustrating that use of the truncated versions may result in models different from those selected by the standard Cp.
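A minimal sketch of the standard (untruncated) Cp statistic, assuming σ² is estimated by the residual mean square of the full model; the truncated versions discussed in the article are not implemented here.

```python
import numpy as np

def mallows_cp(X_sub, X_full, y):
    """Cp = SSE(sub) / s2 - n + 2 * p_sub, where s2 is the residual mean
    square of the full model and p_sub the number of columns kept."""
    n = len(y)
    beta_f, *_ = np.linalg.lstsq(X_full, y, rcond=None)
    s2 = np.sum((y - X_full @ beta_f) ** 2) / (n - X_full.shape[1])
    beta_s, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
    sse_s = np.sum((y - X_sub @ beta_s) ** 2)
    return sse_s / s2 - n + 2 * X_sub.shape[1]

rng = np.random.default_rng(2)
X_full = np.column_stack([np.ones(40), rng.normal(size=(40, 2))])
y = X_full @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=40)
cp_full = mallows_cp(X_full, X_full, y)  # equals p_full for the full model
```

Evaluating Cp over all column subsets of X_full and picking the subset with Cp close to its column count is the standard selection rule this statistic supports.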

5.
P. Jagers, Statistics, 2013, 47(4): 455-464
For a suitable norm, conservation of the distance between expectation and hypothesis may furnish a basis for data reduction by invariance in the linear, not necessarily normal, model. If the norm is Euclidean (i.e., based on some inner product), the maximal invariant is a pair of sums of squares. This provides support for traditional χ² (or F) methods also in nonnormal cases. If the norm is an lp-norm with p ≠ 2, or the sup-norm, the maximal invariant is at best a pair of order statistics.

6.
Three estimators of the proportion p in a tail of the normal distribution are compared using the criteria of mean squared error and mean absolute error. The estimators that we compare are the maximum likelihood estimator (MLE), the minimum variance unbiased estimator (MVUE), and an intuitive estimator that is frequently used in practice. The intuitive estimator is similar to the MLE but uses the usual unbiased estimator of σ² rather than the MLE of σ². We show that the intuitive estimator has low efficiency, and for this reason it is not recommended. For very small p and for large p the MVUE has the highest efficiency; the MLE is best for moderate values of p.
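A minimal sketch of two of the plug-in estimators of the tail proportion p = P(X > c): both evaluate the normal survival function at (c − x̄)/σ̂ and differ only in whether σ̂ is the MLE (divisor n) or the usual unbiased version (divisor n − 1). The MVUE, which has a different form, is omitted.

```python
import numpy as np
from scipy.stats import norm

def tail_mle(x, c):
    """MLE plug-in: sigma estimated with divisor n."""
    s = np.sqrt(np.mean((x - x.mean()) ** 2))
    return norm.sf((c - x.mean()) / s)

def tail_intuitive(x, c):
    """Same form, but with the usual unbiased variance (divisor n - 1)."""
    return norm.sf((c - x.mean()) / x.std(ddof=1))

rng = np.random.default_rng(5)
x = rng.normal(size=2000)
p_mle = tail_mle(x, 1.645)        # true tail probability is about 0.05
p_int = tail_intuitive(x, 1.645)
```

For moderate n the two estimates are nearly identical numerically; the efficiency comparison in the paper concerns their sampling distributions, not a large pointwise gap.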

7.
The least-absolute-deviation estimate of a monotone regression function on an interval has been studied in the literature. If the observation points become dense in the interval, the almost sure rate of convergence has been shown to be O(n^{1/4}). Applying the techniques used by Brunk (1970, Nonparametric Techniques in Statistical Inference, Cambridge Univ. Press), the asymptotic distribution of the l1 estimator at a point is obtained. If the underlying regression function has positive slope at the point, the rate of convergence is seen to be O(n^{1/3}). Monotone percentile regression estimates are also considered.

8.
We consider statistical procedures for feature selection defined by a family of regularization problems with convex piecewise-linear loss functions and penalties of l1 type. Many known statistical procedures (e.g., quantile regression and support vector machines with l1-norm penalty) are subsumed under this category. Computationally, the regularization problems are linear programming (LP) problems indexed by a single parameter, known as 'parametric cost LP' or 'parametric right-hand-side LP' in optimization theory. Exploiting the connection with LP theory, we lay out general algorithms, namely the simplex algorithm and a variant, for generating regularized solution paths for the feature selection problems. The significance of such algorithms is that they allow a complete exploration of the model space along the paths and provide a broad view of persistent features in the data. The implications of the general path-finding algorithms are outlined for several statistical procedures, and they are illustrated with numerical examples.
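As an illustration of the LP structure, median (l1) regression can be written as a linear program by splitting each residual into positive and negative parts. This sketch shows only the basic LP, without the l1 penalty or the parametric path-following machinery of the article.

```python
import numpy as np
from scipy.optimize import linprog

def median_regression(X, y):
    """L1 (median) regression as an LP:
    minimize 1'u+ + 1'u-  subject to  X b + u+ - u- = y,  u+, u- >= 0."""
    n, k = X.shape
    c = np.concatenate([np.zeros(k), np.ones(2 * n)])
    A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
    bounds = [(None, None)] * k + [(0, None)] * (2 * n)
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:k]

X = np.column_stack([np.ones(50), np.arange(50.0)])
y = 2.0 + 0.5 * np.arange(50.0)        # exactly linear data
beta = median_regression(X, y)
```

Adding an l1 penalty on b only adds more variables and constraints of the same linear form, which is why the whole regularization path stays inside parametric LP territory.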

9.
Conventional Phase II statistical process control (SPC) charts are designed using control limits: a chart signals a process distributional shift when its charting statistic exceeds a properly chosen control limit. Under this design, we know only whether the chart is out of control at a given time, which is not informative enough about the likelihood of a potential distributional shift. In this paper, we suggest designing SPC charts using p values. With this approach, at each time point of Phase II process monitoring, the p value of the observed charting statistic is computed under the assumption that the process is in control. If the p value is less than a pre-specified significance level, a signal of distributional shift is delivered. This p value approach has several benefits compared to the conventional design using control limits. First, after a signal of distributional shift is delivered, we know how strong the signal is. Second, even when the p value at a given time point is larger than the significance level, it still provides useful information about how stably the process is performing at that time point. The second benefit is especially useful when we adopt a variable sampling scheme, in which the sampling interval can be longer when a larger p value gives more evidence that the process is running stably. To demonstrate the p value approach, we consider univariate process monitoring by cumulative sum (CUSUM) control charts in various cases.
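A minimal sketch of the p value idea for a one-sided CUSUM chart, with the in-control reference distribution of the charting statistic approximated by Monte Carlo. This is an illustrative stand-in for the paper's computations; the reference constant k and the N(0, 1) in-control model are arbitrary choices.

```python
import numpy as np

def cusum_stat(x, k=0.5):
    """Final value of the one-sided upper CUSUM: C_t = max(0, C_{t-1} + x_t - k)."""
    c = 0.0
    for xi in x:
        c = max(0.0, c + xi - k)
    return c

def cusum_p_value(x, k=0.5, n_sim=2000):
    """Monte Carlo p value of the observed CUSUM under an in-control N(0,1) model."""
    rng = np.random.default_rng(0)
    obs = cusum_stat(x, k)
    sims = np.array([cusum_stat(rng.standard_normal(len(x)), k)
                     for _ in range(n_sim)])
    return float(np.mean(sims >= obs))

rng = np.random.default_rng(42)
p_ic = cusum_p_value(rng.standard_normal(20))        # in-control sample
p_oc = cusum_p_value(rng.standard_normal(20) + 1.5)  # mean shifted by 1.5
```

A small p value signals a shift, and its magnitude conveys the strength of the signal, which is the benefit over a bare control-limit comparison.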

10.
We develop a saddle-point approximation for the marginal density of a real-valued function p(θ̂), where θ̂ is a general M-estimator of a p-dimensional parameter θ, that is, the solution of the system {n^{-1} Σ_l ψ_j(Y_l, θ) = 0}, j = 1, …, p. The approximation is applied to several regression problems and yields very good accuracy for small samples. This enables us to compare different classes of estimators according to their finite-sample properties and to determine when asymptotic approximations are useful in practice.

11.
We consider an inhomogeneous Poisson process X on [0, T]. The intensity function of X is supposed to be strictly positive and smooth on [0, T] except at a point θ, at which it has either a 0-type singularity (tends to 0 like |x|^p, p ∈ (0, 1)) or an ∞-type singularity (tends to ∞ like |x|^p, p ∈ (−1, 0)). We suppose that we know the shape of the intensity function, but not the location of the singularity. We consider the problem of estimating this location (shift) parameter θ based on n observations of the process X. We study the Bayesian estimators and, in the case p > 0, the maximum-likelihood estimator. We show that these estimators are consistent, that their rate of convergence is n^{1/(p+1)}, that they have different limit distributions, and that the Bayesian estimators are asymptotically efficient.

12.
ABSTRACT

In this article, we consider the construction of minimum aberration 2^{n−k}: 2^p designs with respect to some existing combined wordlength patterns, where a 2^{n−k}: 2^p design is a blocked two-level design with n treatment factors, 2^p blocks, and N = 2^q runs with q = n − k. Two methods are proposed for two situations: n ≤ 2^{q−p} − 1 and n > N/2. These methods enable us to obtain some new minimum aberration 2^{n−k}: 2^p designs from existing minimum aberration unblocked and blocked designs. Examples are included to illustrate the theory.

13.
Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (or EGARCH) specification is said to capture asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-)maximum likelihood estimator (QMLE) of the EGARCH(p, q) parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable sufficient conditions, such as EGARCH(1,0) or EGARCH(1,1), and possibly only under simulation. A limitation in the development of asymptotic properties of the QMLE for the EGARCH(p, q) model is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this article that the EGARCH(p, q) model can be derived from a stochastic process for which sufficient invertibility conditions can be stated simply and explicitly when the parameters respect a simple condition (using the notation introduced in Part 2, this refers to the cases where α ≥ |γ| or α ≤ −|γ|; the first inequality is generally assumed in the literature on the invertibility of EGARCH). This article provides (in the Appendix) an argument for the possible lack of invertibility when these conditions are not met. This will be useful in reinterpreting the existing properties of the QMLE of the EGARCH(p, q) parameters.
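A hedged sketch of one common EGARCH(1,1) parameterization (an assumption; the article's notation may differ), simulating returns with parameters chosen so that α ≥ |γ| holds, consistent with the invertibility condition usually assumed in the literature.

```python
import numpy as np

def simulate_egarch11(omega, alpha, gamma, beta, n, rng):
    """Simulate one common EGARCH(1,1) form:
    log s2_t = omega + alpha*(|z_{t-1}| - E|z|) + gamma*z_{t-1} + beta*log s2_{t-1},
    with returns r_t = s_t * z_t and z_t iid N(0, 1)."""
    e_abs = np.sqrt(2.0 / np.pi)          # E|z| for a standard normal
    z = rng.standard_normal(n)
    log_s2 = np.empty(n)
    log_s2[0] = omega / (1.0 - beta)      # start at the unconditional level
    for t in range(1, n):
        log_s2[t] = (omega + alpha * (abs(z[t - 1]) - e_abs)
                     + gamma * z[t - 1] + beta * log_s2[t - 1])
    return np.exp(0.5 * log_s2) * z

rng = np.random.default_rng(7)
# alpha = 0.2 >= |gamma| = 0.1, so the stated invertibility condition is respected
r = simulate_egarch11(omega=-0.1, alpha=0.2, gamma=-0.1, beta=0.95, n=1000, rng=rng)
```

A negative γ makes negative shocks raise next-period volatility more than positive ones, which is the leverage effect the abstract describes.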

14.
Large sample tests for the standard Tobit model versus the p-Tobit model of Deaton and Irish (1984) are studied. The normalized one-tailed score test by Deaton and Irish (1984) is shown to be a version of Neyman's C(α) test that is valid for the non-standard problem of the null hypothesis lying on the boundary of the parameter space. This paper then reports the results of Monte Carlo experiments designed to study the small sample performance of large sample tests for the standard Tobit specification versus the p-Tobit specification.

15.
We consider the problem of parameter estimation for an inhomogeneous Poisson process observed on the time interval [0, τ]. We introduce the minimum L1-norm estimator of the unknown parameter and study the asymptotic behavior of the estimator as the number of observations increases. It is established that this estimator is consistent, and we show that the corresponding differences converge to certain limit variables, which are asymptotically normal as τ tends to infinity.

16.
We derive rates of contraction of posterior distributions on non-parametric models resulting from sieve priors. The aim of the study is to provide general conditions for obtaining posterior rates when the parameter space has a general structure, and rate adaptation when the parameter lies in, for example, a Sobolev class. The conditions employed, although standard in the literature, are combined in a different way. The results are applied to density, regression, nonlinear autoregression and Gaussian white noise models. In the latter we also consider a loss function different from the usual l2-norm, namely the pointwise loss. In this case it is possible to prove that the adaptive Bayesian approach for the l2 loss is strongly suboptimal, and we provide a lower bound on the rate.

17.
Summary: Lp-norm weighted depth functions are introduced, and the local and global robustness of these weighted Lp-depth functions and their induced multivariate medians are investigated via the influence function and the finite sample breakdown point. To study the global robustness of depth functions, a notion of finite sample breakdown point is introduced. The weighted Lp-depth functions turn out to have the same low breakdown point as some other popular depth functions, and their influence functions are also unbounded. On the other hand, the weighted Lp-depth induced medians are globally robust, with the highest possible breakdown point for any reasonable estimator. The weighted Lp-medians are also locally robust, with bounded influence functions for suitable weight functions. Unlike other existing depth functions and multivariate medians, the weighted Lp depth and medians are easy to calculate in high dimensions. The price for this advantage is the lack of affine invariance and equivariance of the weighted Lp depth and medians, respectively.
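A minimal sketch of the unweighted Lp-depth special case, D(x) = 1/(1 + average Lp distance from x to the sample); the weight function studied in the paper is omitted, so this is a simplification rather than the paper's full definition.

```python
import numpy as np

def lp_depth(x, data, p=2):
    """Lp depth of point x with respect to a sample:
    1 / (1 + mean Lp distance from x to the sample points).
    (Unweighted special case; the paper studies weighted versions.)"""
    dists = np.sum(np.abs(data - x) ** p, axis=1) ** (1.0 / p)
    return 1.0 / (1.0 + dists.mean())

rng = np.random.default_rng(3)
data = rng.normal(size=(200, 3))
center_depth = lp_depth(np.zeros(3), data)     # deep point near the center
outlier_depth = lp_depth(np.full(3, 10.0), data)  # shallow outlying point
```

The computation is a single pass over the sample in any dimension, which reflects the ease of calculation in high dimensions noted in the abstract.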

18.
We derive the L1-limit of trimmed sums of order statistics from location-scale distributions satisfying certain assumptions. Based on this limit, an approximation to the asymptotic variance of a best asymptotically normal (BAN) estimator for the location parameter is developed. Associated formulae are derived for four location-scale distributions commonly used in lifetime data analysis. The approximation is analyzed via the properties of the approximating function and by comparison with the exact values for a special case. Applications are illustrated by applying the approximation to comparing location parameters and to selecting the population with the largest location parameter, using censored samples from location-scale populations.

19.
The minimum aberration criterion has been advocated for ranking foldovers of 2^{k−p} fractional factorial designs (Li and Lin, 2003); however, a minimum aberration design may not maximize the number of clear low-order effects. We propose using foldover plans that sequentially maximize the number of clear low-order effects in the combined (initial plus foldover) design, and we investigate the extent to which these foldover plans differ from those that are optimal under the minimum aberration criterion. A small catalog is provided to summarize the results.

20.
We propose a shrinkage procedure for simultaneous variable selection and estimation in generalized linear models (GLMs) with an explicit predictive motivation. The procedure estimates the coefficients by minimizing the Kullback-Leibler divergence of a set of predictive distributions to the corresponding predictive distributions for the full model, subject to an l1 constraint on the coefficient vector. This results in selection of a parsimonious model with predictive performance similar to that of the full model. Thanks to its similarity to the original Lasso problem for GLMs, our procedure can benefit from available l1-regularization path algorithms. Simulation studies and real data examples confirm the efficiency of our method in terms of predictive performance on future observations.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号