Similar Documents
20 similar documents found.
1.
In many applications, the parameters of interest are estimated by solving non-smooth estimating functions with U-statistic structure. Because the asymptotic covariance matrix of the estimator generally involves the underlying density function, resampling methods are often used to bypass the difficulty of non-parametric density estimation. Despite its simplicity, the resulting covariance matrix estimator depends on the nature of the resampling, and the method can be time-consuming when the number of replications is large. Furthermore, the inferences are based on a normal approximation that may not be accurate for practical sample sizes. In this paper, we propose a jackknife empirical likelihood-based inferential procedure for non-smooth estimating functions. Standard chi-square distributions are used to calculate the p-value and to construct confidence intervals. Extensive simulation studies and two real examples are provided to illustrate its practical utility.
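A minimal sketch of the jackknife empirical likelihood idea, using the sample median as a simple non-smooth estimator: form leave-one-out pseudo-values, treat them as approximately i.i.d., and calibrate the empirical likelihood ratio against a chi-square distribution. This illustrates the general recipe only, not the authors' exact procedure for U-statistic-structured estimating functions.

```python
import numpy as np
from scipy.optimize import brentq

def jackknife_pseudo_values(x, estimator):
    """Leave-one-out jackknife pseudo-values for a (possibly non-smooth) estimator."""
    n = len(x)
    theta_full = estimator(x)
    loo = np.array([estimator(np.delete(x, i)) for i in range(n)])
    return n * theta_full - (n - 1) * loo

def jel_log_ratio(v, theta0):
    """-2 log empirical likelihood ratio at theta0, treating pseudo-values as i.i.d."""
    z = v - theta0
    if z.min() * z.max() >= 0:      # theta0 outside the convex hull of pseudo-values
        return np.inf
    # Lagrange multiplier solves sum z_i / (1 + lam z_i) = 0; g is monotone on the bracket
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    g = lambda lam: np.sum(z / (1 + lam * z))
    lam = brentq(g, lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.exponential(size=50)
v = jackknife_pseudo_values(x, np.median)   # median: a simple non-smooth example
stat = jel_log_ratio(v, theta0=np.log(2))   # true median of the exponential(1) law
print(stat)  # compare with a chi-square(1) quantile, e.g. 3.84 at the 5% level
```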

2.
The Hinde–Demétrio (HD) family of distributions, which are discrete exponential dispersion models with an additional real index parameter p, has recently been characterized via the unit variance function μ + μ^p. For p = 2, 3, …, the corresponding distributions are concentrated on the non-negative integers, and are overdispersed and zero-inflated relative to a Poisson distribution with the same mean. The negative binomial (p = 2) and strict arcsine (p = 3) distributions are HD families; the limit case (p → ∞) is associated with a suitable Poisson distribution. Apart from these count distributions, none of the HD distributions has an explicit probability mass function p_k. This article shows that the ratios r_k = k p_k / p_{k−1}, k = 1, …, p − 1, are equal and differ from r_p. This new property makes it possible, for a given count data set, to determine the integer p through tests. The extreme situation p = 2 is of general interest for count data. Some examples are used for illustration and discussion.
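A quick numerical check of the ratio property on two boundary members of the family, using only stock NumPy samplers (the strict arcsine case, p = 3, has no standard sampler and is omitted): for a Poisson sample (the p → ∞ limit) all empirical ratios r_k should be roughly equal to the mean, while for a negative binomial sample (p = 2) they grow linearly in k.

```python
import numpy as np

def empirical_ratios(sample, k_max):
    """Empirical ratios r_k = k * p_k / p_{k-1} from observed count data."""
    sample = np.asarray(sample)
    freq = np.array([(sample == k).mean() for k in range(k_max + 1)])
    ks = np.arange(1, k_max + 1)
    return ks * freq[1:] / freq[:-1]

rng = np.random.default_rng(1)
print(empirical_ratios(rng.poisson(3.0, 100_000), 6))               # all close to 3
print(empirical_ratios(rng.negative_binomial(5, 0.5, 100_000), 6))  # linear in k
```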

3.
In their recent work, Jiang and Yang studied six classical likelihood ratio test statistics in a high-dimensional setting. Assuming that a random sample of size n is observed from a p-dimensional normal population, they derived central limit theorems (CLTs) when p and n are proportional to each other; these differ from the classical chi-square limits obtained as n goes to infinity with p fixed. In this paper, by developing a new tool, we prove that the six CLTs hold in a more applicable setting: p goes to infinity, and p can be very close to n. This condition is almost necessary and sufficient for the CLTs. Simulated histograms, comparisons of sizes and powers with the classical chi-square approximations, and discussion are presented afterwards.
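To see why the classical approximation breaks down, the sketch below simulates the familiar Gaussian likelihood ratio statistic for H0: Σ = I (a test of this type, though not necessarily in the papers' exact normalization) with p comparable to n; its Monte Carlo mean drifts far from the chi-square mean p(p + 1)/2.

```python
import numpy as np

def lrt_identity_cov(X):
    """-2 log LRT statistic for H0: Sigma = I with Gaussian data (MLE covariance)."""
    n, p = X.shape
    S = X.T @ X / n
    sign, logdet = np.linalg.slogdet(S)
    return n * (np.trace(S) - logdet - p)

rng = np.random.default_rng(2)
n, p = 200, 100                          # p close to n: chi-square limit fails
stats = np.array([lrt_identity_cov(rng.standard_normal((n, p)))
                  for _ in range(500)])
df = p * (p + 1) / 2
print(stats.mean(), df)                  # Monte Carlo mean vs chi-square mean df
```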

4.
Abstract. A test for two-sided equivalence of means has been developed under the assumption of normally distributed populations with heterogeneous variances. Its rejection region is bounded by functions ±h that depend on the empirical variances. h is given implicitly by a partial differential equation, an exact solution of which would provide a test that is exactly similar at the boundary of the null hypothesis of non-equivalence. h is approximated by a Taylor series up to third powers in the reciprocal of the number of degrees of freedom. This suffices to obtain error probabilities of the first kind that are very close to a nominal level of α = 0.05 at the boundary of the null hypothesis. For more than 10 data points in each group, they range between 0.04995 and 0.05005, and are thus much more precise than those obtained by other authors.
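The refined test cannot be reproduced from the abstract alone; as a crude baseline for the same testing problem, here is the standard two one-sided Welch t-tests (TOST) procedure, which is far less precise at the boundary but shows the structure of an equivalence test under heterogeneous variances. The equivalence margin delta below is an arbitrary illustrative choice.

```python
import numpy as np
from scipy import stats

def welch_tost(x, y, delta):
    """Two one-sided Welch t-tests for |mu_x - mu_y| < delta (baseline, not the paper's test)."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    se = np.sqrt(vx / nx + vy / ny)
    df = se**4 / ((vx / nx)**2 / (nx - 1) + (vy / ny)**2 / (ny - 1))  # Welch-Satterthwaite
    d = x.mean() - y.mean()
    p_lower = 1 - stats.t.cdf((d + delta) / se, df)   # H0: d <= -delta
    p_upper = stats.t.cdf((d - delta) / se, df)       # H0: d >= +delta
    return max(p_lower, p_upper)                      # reject non-equivalence if <= alpha

rng = np.random.default_rng(11)
x = rng.normal(0.0, 1.0, 30)
y = rng.normal(0.1, 2.0, 30)
print(welch_tost(x, y, delta=0.5))
```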

5.
Markov-switching (MS) models are becoming increasingly popular as efficient tools for modeling various phenomena in different disciplines, in particular non-Gaussian time series. In this article, we propose a broad class of Markov-switching bilinear GARCH processes (MS-BLGARCH hereafter) obtained by adding to an MS-GARCH model one or more interaction components between the observed series and its volatility process. This parameterization offers remarkably rich dynamics and complex behavior for modeling and forecasting financial time-series data that exhibit structural changes. In these models, the parameters of the conditional variance are allowed to vary according to a latent time-homogeneous Markov chain with finite state space, or "regimes." The main aim of this new model is to capture asymmetry, and in particular the leverage effect, characterized by negative correlation between return shocks and subsequent shocks in volatility, within different regimes. First, some basic structural properties of this new model are given, including sufficient conditions ensuring the existence of stationary, causal, ergodic solutions and moment properties. Second, since the second-order structure provides useful information for identifying an appropriate time-series model, we derive the covariance function of the MS-BLGARCH process and of its powers. As a consequence, we find that the second- (resp. higher-) order structure is similar to that of some linear processes, and hence an MS-BLGARCH process (resp. its powers) admits an ARMA representation. This finding allows parameter estimation via a GMM procedure, which is validated in a Monte Carlo study and applied to the exchange rate of the Algerian dinar against the single European currency.
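A simulation sketch of a two-regime MS-GARCH(1,1) augmented with one bilinear interaction term; the exact form of the interaction component (b[s] * y_{t-1} * sqrt(h_{t-1})) and all parameter values are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def simulate_ms_blgarch(T, P, omega, alpha, beta, b, seed=0):
    """Simulate a 2-regime MS-GARCH(1,1) with a bilinear term b[s]*y*sqrt(h) (illustrative)."""
    rng = np.random.default_rng(seed)
    s, h, y = 0, 1.0, 0.0
    out = np.empty(T)
    for t in range(T):
        s = rng.choice(2, p=P[s])                 # latent Markov regime switch
        h = omega[s] + alpha[s] * y**2 + beta[s] * h + b[s] * y * np.sqrt(h)
        h = max(h, 1e-12)                         # keep the conditional variance positive
        y = np.sqrt(h) * rng.standard_normal()
        out[t] = y
    return out

P = np.array([[0.95, 0.05], [0.10, 0.90]])        # regime transition matrix
y = simulate_ms_blgarch(2000, P, omega=[0.05, 0.2], alpha=[0.05, 0.15],
                        beta=[0.9, 0.6], b=[-0.05, -0.1])
```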

6.
Abstract. In this article, we propose a new parametric family of models for real-valued spatio-temporal stochastic processes S(x, t) and show how low-rank approximations can be used to overcome the computational problems that arise in fitting the proposed class of models to large datasets. Separable covariance models, in which the spatio-temporal covariance function of S(x, t) factorizes into a product of purely spatial and purely temporal functions, are often used as a convenient working assumption but are too inflexible to cover the range of covariance structures encountered in applications. We define positive and negative non-separability and show that in our proposed family we can capture positive, zero and negative non-separability by varying the value of a single parameter.
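The proposed family is not spelled out in the abstract; as a point of comparison, the well-known Gneiting (2002) class below likewise indexes the degree of space-time interaction with a single parameter β ∈ [0, 1], with β = 0 giving a separable product model. This is a stand-in example, not the authors' family.

```python
import numpy as np

def gneiting_cov(h, u, sigma2=1.0, a=1.0, c=1.0, alpha=1.0, gamma=0.5, beta=0.0):
    """Gneiting-class space-time covariance in spatial lag h and temporal lag u;
    beta = 0 factorizes into a temporal term times a spatial powered exponential."""
    psi = a * np.abs(u)**(2 * alpha) + 1.0
    return sigma2 / psi * np.exp(-c * np.abs(h)**(2 * gamma) / psi**(beta * gamma))

# beta = 0: separable; beta = 1: strongest space-time interaction
print(gneiting_cov(1.0, 1.0, beta=0.0), gneiting_cov(1.0, 1.0, beta=1.0))
```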

7.
Abstract. A non-parametric rank-based test of exchangeability for bivariate extreme-value copulas is first proposed. The two key ingredients of the suggested approach are the non-parametric rank-based estimators of the Pickands dependence function recently studied by Genest and Segers, and a multiplier technique for obtaining approximate p-values for the derived statistics. The proposed approach is then extended to left-tail decreasing dependence structures that are not necessarily extreme-value copulas. Large-scale Monte Carlo experiments are used to investigate the level and power of the various versions of the test and show that the proposed procedure can be substantially more powerful than tests of exchangeability derived directly from the empirical copula. The approach is illustrated on well-known financial data.
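A minimal sketch of the first ingredient: a classical rank-based Pickands-type estimator of the dependence function A(t), together with a crude asymmetry statistic comparing A(t) with A(1 − t). The multiplier technique for p-values is omitted, and the sample below is arbitrary illustrative data.

```python
import numpy as np
from scipy.stats import rankdata

def pickands_A(x, y, t):
    """Rank-based Pickands estimator of the dependence function A(t) (no endpoint correction)."""
    n = len(x)
    u = rankdata(x) / (n + 1)                    # pseudo-observations
    v = rankdata(y) / (n + 1)
    xi = np.minimum(-np.log(u) / (1 - t), -np.log(v) / t)
    return 1.0 / xi.mean()

rng = np.random.default_rng(3)
x = rng.standard_normal(500)
y = 0.6 * x + 0.8 * rng.standard_normal(500)
t_grid = np.linspace(0.1, 0.9, 9)
A = np.array([pickands_A(x, y, t) for t in t_grid])
asym = np.max(np.abs(A - A[::-1]))               # compare A(t) with A(1 - t)
print(asym)  # approximate p-values would come from the multiplier bootstrap (omitted)
```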

8.
In this paper, we consider the problem of adaptive density or survival function estimation in an additive model defined by Z = X + Y with X independent of Y, when both random variables are non-negative. This model is relevant, for instance, in reliability, where we are interested in the failure time of a certain material that cannot be isolated from the system to which it belongs. Our goal is to recover the distribution of X (density or survival function) from n observations of Z, assuming that the distribution of Y is known. This is the classical statistical problem of deconvolution, which has been tackled in many cases using Fourier-type approaches. In the present case, however, the random variables have the particularity of being supported on the non-negative half-line. Exploiting this, we propose a new angle of attack by building a projection estimator on an appropriate Laguerre basis. We present upper bounds on the mean integrated squared risk of our density and survival function estimators. We then describe a non-parametric data-driven strategy for selecting a relevant projection space. The procedures are illustrated on simulated data and compared with the performance of a more classical deconvolution approach using Fourier methods. Our procedure achieves faster convergence rates than Fourier methods for estimating these functions.
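A sketch of the projection machinery on the orthonormal Laguerre basis ℓ_k(x) = √2 L_k(2x) e^{−x} on R+. For brevity it estimates the density of the observed Z directly via empirical basis coefficients, leaving out the deconvolution step that inverts the (known) law of Y; the data and dimension K are illustrative.

```python
import numpy as np
from scipy.special import eval_laguerre

def laguerre_basis(k, x):
    """Orthonormal Laguerre functions on R+: l_k(x) = sqrt(2) * L_k(2x) * exp(-x)."""
    return np.sqrt(2.0) * eval_laguerre(k, 2.0 * x) * np.exp(-x)

def projection_density(z, K):
    """Projection estimator of the density of Z on span{l_0, ..., l_{K-1}}:
    coefficients a_k = E[l_k(Z)] estimated by empirical means."""
    coef = np.array([laguerre_basis(k, z).mean() for k in range(K)])
    return lambda x: sum(c * laguerre_basis(k, x) for k, c in enumerate(coef))

rng = np.random.default_rng(4)
z = rng.gamma(2.0, size=2000)            # stand-in for observations of Z = X + Y
f_hat = projection_density(z, K=8)
print(f_hat(np.array([0.5, 1.0, 2.0])))
```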

9.
The authors propose a new monotone nonparametric estimate for a regression function of two or more variables. Their method consists in applying successively one‐dimensional isotonization procedures on an initial, unconstrained nonparametric regression estimate. In the case of a strictly monotone regression function, they show that the new estimate and the initial one are first‐order asymptotic equivalent; they also establish asymptotic normality of an appropriate standardization of the new estimate. In addition, they show that if the regression function is not monotone in one of its arguments, the new estimate and the initial one have approximately the same Lp‐norm. They illustrate their approach by means of a simulation study, and two data examples are analyzed.
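A one-dimensional slice of the two-step idea (the paper treats two or more covariates, isotonizing each coordinate in turn): start from an unconstrained kernel estimate, then apply a one-dimensional isotonization. The bandwidth and test function are illustrative.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(size=200))
y = np.sin(1.5 * x) + 0.2 * rng.standard_normal(200)   # increasing on [0, 1], plus noise

# step 1: unconstrained Nadaraya-Watson kernel estimate on a grid
grid = np.linspace(0, 1, 101)
h = 0.08
w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
m_hat = (w @ y) / w.sum(axis=1)

# step 2: one-dimensional isotonization of the initial estimate
m_mono = IsotonicRegression(increasing=True).fit_transform(grid, m_hat)
```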

10.
Skew-symmetric families of distributions such as the skew-normal and skew-t represent supersets of the normal and t distributions, and they exhibit richer classes of extremal behaviour. By defining a non-stationary skew-normal process, which allows the easy handling of positive definite, non-stationary covariance functions, we derive a new family of max-stable processes: the extremal skew-t process. This family is a superset of non-stationary processes and includes the stationary extremal-t processes. We provide the spectral representation and the resulting angular densities of the extremal skew-t process and illustrate its practical implementation.
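The underlying building block is easy to simulate; below is a sketch of Azzalini's stochastic representation for skew-normal variates (the extremal skew-t construction itself requires considerably more machinery, which is not reproduced here).

```python
import numpy as np

def rskewnormal(n, alpha, rng):
    """Azzalini skew-normal variates via Z = delta*|X0| + sqrt(1 - delta^2)*X1,
    with delta = alpha / sqrt(1 + alpha^2)."""
    delta = alpha / np.sqrt(1 + alpha**2)
    x0 = np.abs(rng.standard_normal(n))
    x1 = rng.standard_normal(n)
    return delta * x0 + np.sqrt(1 - delta**2) * x1

rng = np.random.default_rng(12)
z = rskewnormal(10_000, alpha=5.0, rng=rng)
print(z.mean())   # positive skewness parameter shifts mass to the right
```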

11.
In this paper, we consider non-parametric copula inference under bivariate censoring. Based on an estimator of the joint cumulative distribution function, we define a discrete and two smooth estimators of the copula. The construction that we propose is valid for a large range of estimators of the distribution function and therefore for a large range of bivariate censoring frameworks. Under some conditions on the tails of the distributions, the weak convergence of the corresponding copula processes is obtained in ℓ^∞([0, 1]^2). We derive the uniform convergence rates of the copula density estimators deduced from our smooth copula estimators. The practical behaviour of these estimators is investigated through a simulation study and two real data applications corresponding to different censoring settings. We use our non-parametric estimators to define a goodness-of-fit procedure for parametric copula models, and a new bootstrap scheme is proposed to compute the critical values.
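In the special case of no censoring, a discrete estimator of this kind reduces to the familiar empirical copula; a minimal sketch (with arbitrary illustrative data) follows.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(x, y, u, v):
    """Empirical copula C_n(u, v): the no-censoring special case of a copula
    estimator built from an estimated joint CDF and its marginal inverses."""
    n = len(x)
    ru = rankdata(x) / (n + 1)
    rv = rankdata(y) / (n + 1)
    return np.mean((ru <= u) & (rv <= v))

rng = np.random.default_rng(10)
x = rng.standard_normal(1000)
y = 0.5 * x + rng.standard_normal(1000)
print(empirical_copula(x, y, 0.5, 0.5))   # independence would give about 0.25
```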

12.
Abstract. We propose a non-linear density estimator which is locally adaptive, like wavelet estimators, and positive everywhere, without a log- or root-transform. The estimator is based on maximizing a non-parametric log-likelihood function regularized by a total variation penalty. The smoothness is driven by a single penalty parameter, and to avoid cross-validation we derive an information criterion based on the idea of universal penalty. The penalized log-likelihood maximization is reformulated as an ℓ1-penalized strictly convex programme whose unique solution is the density estimate. A Newton-type method cannot be applied to calculate the estimate because the ℓ1 penalty is non-differentiable. Instead, we use a dual block coordinate relaxation method that exploits the problem structure. By comparing with kernel, spline and taut string estimators in a Monte Carlo simulation, and by investigating the sensitivity to ties on two real data sets, we observe that the new estimator achieves good L1 and L2 risk for densities with sharp features, and behaves well in the presence of ties.
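The optimization problem is easy to state on a grid; the sketch below solves a discretized version with a generic convex solver (cvxpy), whereas the paper's dual block coordinate relaxation method exploits the problem structure directly. The binning and penalty level are illustrative.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-2, 0.3, 300), rng.normal(1, 1.0, 300)])
grid = np.linspace(-4, 4, 161)
dx = grid[1] - grid[0]
idx = np.clip(np.searchsorted(grid, x), 0, len(grid) - 1)   # nearest grid point per datum

f = cp.Variable(len(grid), nonneg=True)                     # density values on the grid
lam = 2.0                                                   # total-variation penalty level
objective = cp.sum(cp.log(f[idx])) - lam * cp.norm1(cp.diff(f))
problem = cp.Problem(cp.Maximize(objective), [cp.sum(f) * dx == 1])
problem.solve()
f_hat = f.value
```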

13.
Birth-multiple catastrophe processes are analyzed in which the birth transition rates are assumed constant, while catastrophes may have different destinations and different transition rates. The transient probability functions of such birth-multiple catastrophe systems are determined. The solution method uses dual processes, randomization, and sample-path counting. The solutions are explicit: each is a finite linear combination of products of exponential functions of time t and non-negative integer powers of t. The coefficients in this expansion are rational functions of the transition rates.
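For a concrete feel of the transient behaviour, the sketch below integrates the Kolmogorov forward equations numerically for a toy chain with constant birth rate and a single catastrophe destination; the paper allows multiple destinations and rates and obtains the exponential-polynomial solutions analytically.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy birth-catastrophe chain on {0, ..., 4}: constant birth rate b (i -> i+1),
# catastrophe rate c from any state back to 0.
b, c, N = 1.0, 0.5, 5
Q = np.zeros((N, N))
for i in range(N):
    if i < N - 1:
        Q[i, i + 1] = b           # birth
    if i > 0:
        Q[i, 0] = c               # catastrophe to state 0
    Q[i, i] = -Q[i].sum()         # generator rows sum to zero

p0 = np.zeros(N); p0[0] = 1.0
sol = solve_ivp(lambda t, p: p @ Q, (0, 10), p0, t_eval=np.linspace(0, 10, 6))
print(sol.y[:, -1])               # transient probabilities p(t) at t = 10
```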

14.
On-line process control consists of inspecting one item out of every m produced (m an integer, m ≥ 2). Based on the result of the inspection, it is decided whether the process is in control (the fraction of conforming items is p1; State I) or out of control (the fraction of conforming items is p2 < p1; State II). If the inspected item is non-conforming, the process is declared out of control and production is stopped for an adjustment; otherwise, production continues. As most designs of on-line process control assume long-run production, this study can be viewed as an extension: it is concerned with short-run production, and the decision regarding the process is subject to misclassification errors. The probabilistic model of the control system uses properties of an ergodic Markov chain to obtain an expression for the average cost of the system per unit produced, which can be minimised as a function of the sampling interval m. The procedure is illustrated by a numerical example.
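The paper's Markov-chain cost expression is not given in the abstract; the sketch below only illustrates the shape of the optimization, minimizing a placeholder per-item cost (inspection cost spread over m items plus a rough penalty for late detection) over the sampling interval m. Every cost component here is a hypothetical stand-in, not the paper's model.

```python
import numpy as np

# Placeholder parameters: inspection cost, cost per undetected non-conforming item,
# per-item probability of a shift to State II, and conforming fraction p2 in State II.
c_inspect, c_defect, shift_rate, p2 = 5.0, 20.0, 0.01, 0.8

def cost_per_item(m):
    """Heuristic per-item cost: after a shift, roughly m/2 items pass before the next
    inspection, and detection takes a geometric number of inspections (prob 1 - p2)."""
    detection_lag = m / 2 + m / (1 - p2)
    return c_inspect / m + c_defect * shift_rate * detection_lag * (1 - p2)

m_grid = np.arange(2, 51)
costs = np.array([cost_per_item(m) for m in m_grid])
print(m_grid[costs.argmin()], costs.min())   # cost-minimizing sampling interval
```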

15.
The posterior predictive p-value (ppp) was invented as a Bayesian counterpart to classical p-values. The methodology can be applied to discrepancy measures involving both data and parameters and can hence be targeted to check various modeling assumptions. The interpretation can, however, be difficult, since the distribution of the ppp value under the modeling assumptions varies substantially between cases. A calibration procedure has been suggested, treating the ppp value as a test statistic in a prior predictive test. In this paper, we suggest that a prior predictive test may instead be based on the expected posterior discrepancy, which is somewhat simpler both conceptually and computationally. Since both methods require the simulation of a large posterior parameter sample for each draw in an equally large prior predictive data sample, we further suggest looking for ways to match the given discrepancy with a computation-saving conflict measure. This approach is also based on simulations but only requires sampling from two different distributions representing two contrasting information sources about a model parameter. The conflict measure methodology is also more flexible in that it handles non-informative priors without difficulty. We compare the different approaches theoretically in some simple models and in a more complex applied example.
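A generic sketch of the basic (uncalibrated) ppp computation, on top of which the calibration and conflict-measure refinements sit; the conjugate normal-mean example and the max-residual discrepancy are illustrative choices.

```python
import numpy as np

def posterior_predictive_pvalue(y, post_draws, simulate, discrepancy, rng):
    """Fraction of replicated discrepancies that exceed the observed one;
    the discrepancy may involve both data and parameters."""
    exceed = 0
    for theta in post_draws:
        y_rep = simulate(theta, rng)
        exceed += discrepancy(y_rep, theta) >= discrepancy(y, theta)
    return exceed / len(post_draws)

# normal-mean example: N(0, tau^2) prior, known sigma = 1, conjugate posterior
rng = np.random.default_rng(7)
y = rng.normal(0.5, 1.0, size=30)
n, tau2 = len(y), 100.0
post_var = 1.0 / (n + 1.0 / tau2)
post_draws = rng.normal(post_var * n * y.mean(), np.sqrt(post_var), size=2000)
ppp = posterior_predictive_pvalue(
    y, post_draws,
    simulate=lambda th, r: r.normal(th, 1.0, size=n),
    discrepancy=lambda d, th: np.max(np.abs(d - th)),
    rng=rng)
print(ppp)
```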

16.
This paper contains an application of the asymptotic expansion of a pFp(·) hypergeometric function to a problem encountered in econometrics. In particular, we consider an approximation to the distribution function of the limited information maximum likelihood (LIML) identifiability test statistic using the method of moments. An expression for the Sth-order asymptotic approximation of the moments of the LIML identifiability test statistic is derived and tabulated. The exact distribution function of the test statistic is approximated by a member of the class of F (variance ratio) distribution functions having the same first two integer moments. Some tabulations of the approximating distribution function are included.
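The moment-matching step can be reproduced in outline: given the first two integer moments of the statistic, solve for the degrees of freedom of the F distribution with those moments. The moment values in the example are hypothetical.

```python
import numpy as np

def match_F(m1, m2):
    """Degrees of freedom (d1, d2) of an F distribution with first two integer
    moments m1 = E[X] and m2 = E[X^2] (requires m1 > 1 and d2 > 4)."""
    d2 = 2.0 * m1 / (m1 - 1.0)                    # from E[X] = d2 / (d2 - 2)
    # E[X^2] = d2^2 (d1 + 2) / (d1 (d2 - 2)(d2 - 4))  =>  solve for d1
    r = m2 * (d2 - 2.0) * (d2 - 4.0) / d2**2
    d1 = 2.0 / (r - 1.0)
    return d1, d2

print(match_F(1.25, 2.6))   # hypothetical moments, giving d2 = 10 and d1 about 8.1
```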

17.
A new generalized p-value method is proposed for testing the equality of the coefficients of variation in k normal populations. Simulation studies show that the type I error probabilities are close to the nominal level. The proposed test is also compared with the likelihood ratio test, the modified Bennett's test and the score test through Monte Carlo simulation; the results demonstrate that the generalized p-value method has satisfactory performance in terms of size and power.
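One ingredient of such a procedure is a generalized pivotal quantity (GPQ) for a single normal coefficient of variation; a standard construction is sketched below (the paper's k-sample comparison statistic is not reproduced here).

```python
import numpy as np

def gpq_cv(xbar, s, n, n_draws, rng):
    """GPQ draws for the coefficient of variation sigma/mu of a normal sample,
    given the observed mean xbar and standard deviation s (standard construction)."""
    z = rng.standard_normal(n_draws)
    u2 = rng.chisquare(n - 1, n_draws)
    g_sigma = s * np.sqrt((n - 1) / u2)           # pivotal quantity for sigma
    g_mu = xbar - z * g_sigma / np.sqrt(n)        # pivotal quantity for mu
    return g_sigma / g_mu

rng = np.random.default_rng(8)
draws = gpq_cv(xbar=10.0, s=2.0, n=25, n_draws=20_000, rng=rng)
# a generalized test for equality across k groups compares such draws between groups
print(np.percentile(draws, [2.5, 97.5]))
```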

18.
ABSTRACT

Life tables used in life insurance are often calibrated to show the survival function of the age-at-death distribution at exact integer ages. Actuaries usually make fractional age assumptions (FAAs) when valuing payments that are not restricted to integer ages. Traditional FAAs have the advantage of simplicity but cannot be guaranteed to capture the real trends of the survival function, and sometimes even produce a non-intuitive overall shape for the force of mortality. In fact, an FAA is an interpolation between integer-age values that are accepted as given. In this article, we introduce the Kriging model, which is widely used as a metamodel for expensive simulations, to fit the survival function at integer ages, and then use the fitted survival function to construct the force of mortality and the life expectancy. Experimental results obtained from a simulated (Makehamized) life table and two real life tables (the Chinese and US life tables) show that these actuarial quantities (survival function, force of mortality, and life expectancy) are represented much more accurately by the Kriging model than by the commonly used FAAs: the uniform distribution of deaths (UDD) assumption, the constant force assumption, and the Balducci assumption.
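A minimal sketch of the Kriging step using scikit-learn's Gaussian process regressor; the Makeham parameters below are hypothetical stand-ins for a calibrated life table.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# survival values at integer ages under Makeham's law mu(x) = A + B*C^x:
# S(x) = exp(-A*x - B*(C^x - 1)/ln C); parameters are illustrative only
ages = np.arange(0, 101, dtype=float)
A, B, C = 0.0007, 0.00005, 1.09
s = np.exp(-A * ages - B / np.log(C) * (C**ages - 1))

# Kriging (Gaussian process) interpolation between the integer ages
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=5.0), alpha=1e-10)
gpr.fit(ages.reshape(-1, 1), s)
frac_ages = np.linspace(40.0, 41.0, 11).reshape(-1, 1)
print(gpr.predict(frac_ages))   # survival function at fractional ages
```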

19.
We consider wavelet-based non-linear estimators, constructed by thresholding the empirical wavelet coefficients, for mean regression functions with strong mixing errors, and investigate their asymptotic rates of convergence. We show that these estimators achieve nearly optimal convergence rates, up to a logarithmic term, over a large range of Besov function classes B^s_{p,q}. The theory is illustrated with some numerical examples.

A new ingredient in our development is a Bernstein-type exponential inequality for a sequence of random variables that have a certain mixing structure and are not necessarily bounded or sub-Gaussian. This moderate deviation inequality may be of independent interest.
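A sketch of the basic thresholding estimator with PyWavelets, using the universal threshold and i.i.d. Gaussian noise for simplicity; the paper's contribution is the theory under strong mixing errors.

```python
import numpy as np
import pywt

# decompose, soft-threshold the detail coefficients, reconstruct
rng = np.random.default_rng(9)
n = 512
t = np.linspace(0, 1, n)
y = np.piecewise(t, [t < 0.5, t >= 0.5], [0.0, 1.0]) + 0.1 * rng.standard_normal(n)

coeffs = pywt.wavedec(y, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from finest level
thresh = sigma * np.sqrt(2 * np.log(n))               # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
y_hat = pywt.waverec(coeffs, "db4")
```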


20.
We are concerned with a situation in which we would like to test multiple hypotheses with tests whose p-values cannot be computed explicitly but can be approximated using Monte Carlo simulation. This scenario occurs widely in practice. We are interested in obtaining the same rejections and non-rejections as would be obtained if the p-values for all hypotheses were available. The present article introduces a framework for this scenario by providing a generic algorithm for a general multiple testing procedure. We establish conditions that guarantee that the rejections and non-rejections obtained through Monte Carlo simulation are identical to the ones obtained with the true p-values. Our framework is applicable to a general class of step-up and step-down procedures, which includes many established multiple testing corrections such as those of Bonferroni, Holm, Sidak, Hochberg and Benjamini–Hochberg. Moreover, we show how to use our framework to improve algorithms available in the literature so as to yield theoretical guarantees on their results. These modifications can easily be implemented in practice and lead to a particular way of reporting multiple testing results: three sets (rejected, non-rejected and undecided hypotheses) together with an error bound on their correctness, illustrated on a real biological dataset.
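A simplified sketch of the key device: bound each Monte Carlo p-value with a confidence interval, and classify a hypothesis only once the interval falls entirely on one side of the procedure's threshold (the paper develops this with formal guarantees for general step-up and step-down procedures).

```python
import numpy as np
from scipy.stats import beta

def mc_pvalue_bounds(exceed, n_sim, level=1e-3):
    """Clopper-Pearson bounds for a Monte Carlo p-value estimated from
    `exceed` exceedances observed in `n_sim` simulations."""
    lo = beta.ppf(level / 2, exceed, n_sim - exceed + 1) if exceed > 0 else 0.0
    hi = beta.ppf(1 - level / 2, exceed + 1, n_sim - exceed)
    return lo, hi

# a hypothesis is reported as rejected (resp. not rejected) once its whole
# interval lies below (resp. above) the relevant threshold; otherwise it
# remains in the undecided set and more simulations are drawn
print(mc_pvalue_bounds(exceed=3, n_sim=10_000))
```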
