Similar Literature
20 similar documents retrieved (search time: 46 ms)
1.
Abstract. We study the Jeffreys prior and its properties for the shape parameter of univariate skew-t distributions with linear and nonlinear Student's t skewing functions. In both cases, we show that the resulting priors for the shape parameter are symmetric around zero and proper. Moreover, we propose a Student's t approximation of the Jeffreys prior that makes an objective Bayesian analysis easy to perform. We carry out a Monte Carlo simulation study that demonstrates an overall better behaviour of the maximum a posteriori estimator compared with the maximum likelihood estimator. We also compare the frequentist coverage of the credible intervals based on the Jeffreys prior and its approximation and show that they are similar. We further discuss location-scale models under scale mixtures of skew-normal distributions and show some conditions for the existence of the posterior distribution and its moments. Finally, we present three numerical examples to illustrate the implications of our results on inference for skew-t distributions.
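
As a concrete illustration of the setup, the sketch below evaluates the standard Azzalini-type skew-t log-density and computes a MAP estimate of the shape parameter under a Student-t prior centred at zero. The prior's scale and degrees of freedom are placeholders, not the values of the paper's approximation to the Jeffreys prior.

```python
import numpy as np
from scipy import stats, optimize

def skew_t_logpdf(x, alpha, nu):
    """Log-density of the standard Azzalini-type skew-t with shape alpha and df nu:
    2 * t_nu(x) * T_{nu+1}(alpha * x * sqrt((nu + 1) / (nu + x^2)))."""
    return (np.log(2.0) + stats.t.logpdf(x, df=nu)
            + stats.t.logcdf(alpha * x * np.sqrt((nu + 1.0) / (nu + x**2)), df=nu + 1.0))

def map_shape(sample, nu, prior_scale=2.0, prior_df=1.0):
    """MAP estimate of alpha under a Student-t prior centred at zero. The prior's scale and
    df are placeholders standing in for the paper's t approximation of the Jeffreys prior."""
    def neg_log_post(alpha):
        return -(skew_t_logpdf(sample, alpha, nu).sum()
                 + stats.t.logpdf(alpha, df=prior_df, scale=prior_scale))
    return optimize.minimize_scalar(neg_log_post, bounds=(-50.0, 50.0), method="bounded").x

rng = np.random.default_rng(0)
alpha_true, nu = 3.0, 5.0
z = stats.skewnorm.rvs(alpha_true, size=300, random_state=rng)
x = z / np.sqrt(rng.chisquare(nu, size=300) / nu)    # skew-t(alpha_true, nu) sample
print(map_shape(x, nu=nu))
```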

2.
In this paper we consider and propose some confidence intervals for estimating the mean or difference of means of skewed populations. We extend the median t interval to the two sample problem. Further, we suggest using the bootstrap to find the critical points for use in the calculation of median t intervals. A simulation study has been made to compare the performance of the intervals and a real life example has been considered to illustrate the application of the methods.
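
A minimal sketch of the underlying idea, assuming i.i.d. data: bootstrap the studentized statistic to obtain critical points in place of Student-t quantiles. This is the plain bootstrap-t interval for a single mean, not the authors' median-t variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_t_interval(x, alpha=0.05, B=2000):
    """Bootstrap-t interval for the mean: bootstrap the studentized statistic and use its
    quantiles as critical points instead of Student-t quantiles."""
    n = len(x)
    xbar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
    t_star = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=n, replace=True)
        t_star[b] = (xb.mean() - xbar) / (xb.std(ddof=1) / np.sqrt(n))
    lo, hi = np.quantile(t_star, [alpha / 2, 1 - alpha / 2])
    return xbar - hi * se, xbar - lo * se

# Example with a skewed (exponential) sample
x = rng.exponential(scale=2.0, size=40)
print(bootstrap_t_interval(x))
```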

3.
This paper considers estimation of the function g in the model Yt = g(Xt) + εt when E(εt|Xt) ≠ 0 with nonzero probability. We assume the existence of an instrumental variable Zt that is independent of εt, and of an innovation ηt = Xt − E(Xt|Zt). We use a nonparametric regression of Xt on Zt to obtain residuals ηt, which in turn are used to obtain a consistent estimator of g. The estimator was first analyzed by Newey, Powell & Vella (1999) under the assumption that the observations are independent and identically distributed. Here we derive a sample mean-squared-error convergence result for independent identically distributed observations as well as a uniform-convergence result under time-series dependence.
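
A hedged sketch of the two-stage control-function idea, using k-nearest-neighbour regressions as the nonparametric smoother; the simulated design, neighbourhood sizes and evaluation grid are arbitrary illustrations rather than the paper's choices.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

# Simulated endogenous design: eps and X share the shock u, so E(eps | X) != 0
n = 1000
Z = rng.normal(size=n)
u = rng.normal(size=n)
X = 0.8 * Z + u
eps = 0.5 * u + 0.5 * rng.normal(size=n)
Y = np.sin(X) + eps                                  # true g(x) = sin(x)

# Stage 1: nonparametric regression of X on Z, residuals eta_hat = X - E_hat(X | Z)
stage1 = KNeighborsRegressor(n_neighbors=25).fit(Z.reshape(-1, 1), X)
eta_hat = X - stage1.predict(Z.reshape(-1, 1))

# Stage 2: regress Y on (X, eta_hat); evaluating the fit with eta_hat held at zero
# recovers g up to an additive constant (the control-function idea)
stage2 = KNeighborsRegressor(n_neighbors=50).fit(np.column_stack([X, eta_hat]), Y)
grid = np.linspace(-1.5, 1.5, 7)
g_hat = stage2.predict(np.column_stack([grid, np.zeros_like(grid)]))
print(np.round(g_hat, 2))
print(np.round(np.sin(grid), 2))
```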

4.
Skew-symmetric families of distributions such as the skew-normal and skew-t represent supersets of the normal and t distributions, and they exhibit richer classes of extremal behaviour. By defining a non-stationary skew-normal process, which allows the easy handling of positive definite, non-stationary covariance functions, we derive a new family of max-stable processes – the extremal skew-t process. This process is a superset of non-stationary processes that include the stationary extremal-t processes. We provide the spectral representation and the resulting angular densities of the extremal skew-t process and illustrate its practical implementation.

5.
In mixed models the mean square error (MSE) of empirical best linear unbiased estimators generally cannot be written in closed form. Unlike traditional methods of inference, parametric bootstrapping does not require approximation of this MSE or the test statistic distribution. Data were simulated to compare coverage rates for intervals based on the naïve MSE approximation and the method of Kenward and Roger, and parametric bootstrap intervals (Efron's percentile, Hall's percentile, bootstrap-t). The Kenward–Roger method performed best and the bootstrap-t almost as well. Intervals were also compared for a small set of real data. Implications for minimum sample size are discussed.
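
A sketch of the three parametric-bootstrap intervals for the simplest case, a balanced one-way random-effects model with ANOVA variance estimates; the Kenward-Roger adjustment and the general mixed-model machinery are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(10)

def pb_intervals_one_way(y, alpha=0.05, B=2000):
    """Parametric-bootstrap intervals (Efron percentile, Hall percentile, bootstrap-t) for
    the overall mean in a balanced one-way random-effects model y_ij = mu + b_i + e_ij."""
    k, m = y.shape                                   # groups x replicates
    group_means = y.mean(axis=1)
    mu_hat = group_means.mean()
    msw = ((y - group_means[:, None]) ** 2).sum() / (k * (m - 1))
    msb = m * group_means.var(ddof=1)
    tau2 = max((msb - msw) / m, 0.0)                 # between-group variance estimate
    se_hat = np.sqrt(msb / (k * m))                  # naive SE of the grand mean
    mu_b, t_b = np.empty(B), np.empty(B)
    for b in range(B):
        yb = mu_hat + rng.normal(0, np.sqrt(tau2), (k, 1)) + rng.normal(0, np.sqrt(msw), (k, m))
        gm = yb.mean(axis=1)
        mu_b[b] = gm.mean()
        t_b[b] = (mu_b[b] - mu_hat) / np.sqrt(m * gm.var(ddof=1) / (k * m))
    q = [alpha / 2, 1 - alpha / 2]
    lo, hi = np.quantile(mu_b, q)
    efron = (lo, hi)
    hall = (2 * mu_hat - hi, 2 * mu_hat - lo)
    tlo, thi = np.quantile(t_b, q)
    boot_t = (mu_hat - thi * se_hat, mu_hat - tlo * se_hat)
    return efron, hall, boot_t

y = 5 + rng.normal(0, 1, (6, 1)) + rng.normal(0, 0.5, (6, 4))
print(pb_intervals_one_way(y))
```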

6.
The authors give easy-to-check sufficient conditions for the geometric ergodicity and the finiteness of the moments of a random process xt = φ(xt-1,…, xt-p) + εt σ(xt-1,…, xt-q), in which φ: Rp → R, σ: Rq → R and (εt) is a sequence of independent and identically distributed random variables. They deduce strong mixing properties for this class of nonlinear autoregressive models with changing conditional variances, which includes, among others, the ARCH(p), the AR(p)-ARCH(p), and the double-threshold autoregressive models.
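
For concreteness, a simulation sketch of the simplest member of this class, an AR(1) mean with an ARCH(1)-type conditional standard deviation; the parameter values are illustrative and are not the paper's sufficient conditions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ar_arch(n, phi=0.5, omega=0.2, a1=0.3, burn=500):
    """Simulate x_t = phi * x_{t-1} + e_t * sigma_t with sigma_t^2 = omega + a1 * x_{t-1}^2,
    i.e. an AR(1) mean with ARCH(1)-type conditional variance. Parameters are illustrative
    choices well inside the stationary region."""
    x = np.zeros(n + burn)
    e = rng.standard_normal(n + burn)
    for t in range(1, n + burn):
        sigma = np.sqrt(omega + a1 * x[t - 1] ** 2)
        x[t] = phi * x[t - 1] + e[t] * sigma
    return x[burn:]

x = simulate_ar_arch(2000)
print(round(x.mean(), 3), round(x.var(), 3))
```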

7.
In this paper we consider confidence intervals for the ratio of two population variances. We propose a confidence interval for the ratio of two variances based on the t-statistic by deriving its Edgeworth expansion and considering Hall's and Johnson's transformations. Then, we consider the coverage accuracy of the suggested intervals and intervals based on the F-statistic for some distributions.
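
As a baseline for comparison, a sketch of the classical normal-theory F interval for the variance ratio; the Edgeworth-corrected t-statistic intervals studied in the paper are not reproduced.

```python
import numpy as np
from scipy import stats

def variance_ratio_ci(x, y, alpha=0.05):
    """Classical F-based confidence interval for sigma_x^2 / sigma_y^2, the normal-theory
    benchmark against which transformation-based intervals are usually compared."""
    n, m = len(x), len(y)
    ratio = np.var(x, ddof=1) / np.var(y, ddof=1)
    f_lo = stats.f.ppf(alpha / 2, n - 1, m - 1)
    f_hi = stats.f.ppf(1 - alpha / 2, n - 1, m - 1)
    return ratio / f_hi, ratio / f_lo

rng = np.random.default_rng(3)
print(variance_ratio_ci(rng.normal(size=30), rng.normal(scale=2.0, size=40)))
```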

8.
The paper considers joint maximum likelihood (ML) and semiparametric (SP) estimation of copula parameters in a bivariate t-copula. Analytical expressions for the asymptotic covariance matrix involving integrals over special functions are derived, which can be evaluated numerically. These direct evaluations of the Fisher information matrix are compared to Hessian evaluations based on numerical differentiation in a simulation study showing a satisfactory performance of the computationally less demanding Hessian evaluations. Individual asymptotic confidence intervals for the t-copula parameters and the corresponding tail dependence coefficient are derived. For two financial datasets these confidence intervals are calculated using both direct evaluation of the Fisher information and numerical evaluation of the Hessian matrix. These confidence intervals are compared to parametric and nonparametric BCa bootstrap intervals based on ML and SP estimation, respectively, showing a preference for asymptotic confidence intervals based on numerical Hessian evaluations.
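
A small sketch of the tail dependence coefficient of a bivariate t-copula, the quantity for which the confidence intervals are built; the asymptotic variance calculations themselves are not reproduced.

```python
import numpy as np
from scipy import stats

def t_copula_tail_dependence(rho, nu):
    """Tail dependence coefficient of a bivariate t-copula with correlation rho and df nu:
    lambda = 2 * T_{nu+1}(-sqrt((nu + 1) * (1 - rho) / (1 + rho)))."""
    arg = -np.sqrt((nu + 1.0) * (1.0 - rho) / (1.0 + rho))
    return 2.0 * stats.t.cdf(arg, df=nu + 1.0)

print(t_copula_tail_dependence(rho=0.5, nu=4.0))
```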

9.
We propose a new model for regression and dependence analysis when addressing spatial data with possibly heavy tails and an asymmetric marginal distribution. We first propose a stationary process with t marginals obtained through scale mixing of a Gaussian process with an inverse square root process with Gamma marginals. We then generalize this construction by considering a skew-Gaussian process, thus obtaining a process with skew-t marginal distributions. For the proposed (skew) t process, we study the second-order and geometrical properties and in the t case, we provide analytic expressions for the bivariate distribution. In an extensive simulation study, we investigate the use of the weighted pairwise likelihood as a method of estimation for the t process. Moreover we compare the performance of the optimal linear predictor of the t process versus the optimal Gaussian predictor. Finally, the effectiveness of our methodology is illustrated by analyzing a georeferenced dataset on maximum temperatures in Australia.
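
A hedged sketch of the basic scale-mixing construction with a single Gamma mixing variable, which yields t marginals; the paper's spatially varying mixing process and the skew-Gaussian extension are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)

def t_process_sample(coords, nu=5.0, range_=0.3):
    """One realization of a simple t process: scale-mix a stationary Gaussian process
    (exponential covariance) with a single Gamma(nu/2, rate nu/2) variable, giving t_nu
    marginals. This common-mixing version is only the simplest special case."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov = np.exp(-d / range_)
    z = rng.multivariate_normal(np.zeros(len(coords)), cov)
    g = rng.gamma(shape=nu / 2.0, scale=2.0 / nu)      # E[g] = 1
    return z / np.sqrt(g)

coords = rng.uniform(size=(50, 2))
print(np.round(t_process_sample(coords)[:5], 2))
```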

10.
Abstract. For probability distributions on Rq, a detailed study of the breakdown properties of some multivariate M-functionals related to Tyler's [Ann. Statist. 15 (1987) 234] 'distribution-free' M-functional of scatter is given. These include a symmetrized version of Tyler's M-functional of scatter, and the multivariate t M-functionals of location and scatter. It is shown that for 'smooth' distributions, the (contamination) breakdown points of Tyler's M-functional of scatter and of its symmetrized version are 1/q and 1 − √(1 − 1/q), respectively. For the multivariate t M-functional, which arises from the maximum likelihood estimate for the parameters of an elliptical t distribution with ν ≥ 1 degrees of freedom, the breakdown point at smooth distributions is 1/(q + ν). Breakdown points are also obtained for general distributions, including empirical distributions. Finally, the sources of breakdown are investigated. It turns out that breakdown can only be caused by contaminating distributions that are concentrated near low-dimensional subspaces.
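
For concreteness, a sketch of Tyler's M-estimator of scatter computed by the usual fixed-point iteration on centred data; the breakdown analysis itself is theoretical and is not reproduced here.

```python
import numpy as np

def tyler_scatter(X, tol=1e-8, max_iter=200):
    """Tyler's M-estimator of scatter via the fixed-point iteration
    V <- (q/n) * sum_i x_i x_i' / (x_i' V^{-1} x_i), rescaled to trace q.
    The data are assumed to be centred (the functional concerns scatter only)."""
    n, q = X.shape
    V = np.eye(q)
    for _ in range(max_iter):
        Vinv = np.linalg.inv(V)
        d = np.einsum("ij,jk,ik->i", X, Vinv, X)        # x_i' V^{-1} x_i
        V_new = (q / n) * (X.T * (1.0 / d)) @ X
        V_new *= q / np.trace(V_new)                     # fix the scale
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V

rng = np.random.default_rng(5)
X = rng.standard_normal((500, 3)) @ np.diag([1.0, 2.0, 0.5])
print(np.round(tyler_scatter(X), 2))
```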

11.
In recent literature, the truncated normal distribution has been used to model the stochastic structure for a variety of random structures. In this paper, the sensitivity of the t-random variable under a left-truncated normal population is explored. Simulation results are used to assess the errors associated with applying the Student t-distribution to the case of an underlying left-truncated normal population. The maximum errors are modelled as a linear function of the magnitude of the truncation and sample size. In the case of a left-truncated normal population, adjustments to standard inferences for the mean, namely confidence intervals and observed significance levels, based on the t-random variable are introduced.
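
A small simulation sketch of the issue: the empirical coverage of the usual Student-t interval for the mean when sampling from a left-truncated standard normal population. The truncation point, sample size and replication count are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def t_interval_coverage(trunc_at=-1.0, n=20, reps=5000, alpha=0.05):
    """Empirical coverage of the nominal 95% Student-t interval for the mean when the
    population is left-truncated standard normal (truncation in SD units)."""
    pop = stats.truncnorm(trunc_at, np.inf)     # left-truncated standard normal
    true_mean = pop.mean()
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    hits = 0
    for _ in range(reps):
        x = pop.rvs(size=n, random_state=rng)
        half = tcrit * x.std(ddof=1) / np.sqrt(n)
        hits += abs(x.mean() - true_mean) <= half
    return hits / reps

print(t_interval_coverage())
```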

12.
Several methods are available for generating confidence intervals for rate difference, rate ratio, or odds ratio, when comparing two independent binomial proportions or Poisson (exposure-adjusted) incidence rates. Most methods have some degree of systematic bias in one-sided coverage, so that a nominal 95% two-sided interval cannot be assumed to have tail probabilities of 2.5% at each end, and any associated hypothesis test is at risk of inflated type I error rate. Skewness-corrected asymptotic score methods have been shown to have superior equal-tailed coverage properties for the binomial case. This paper completes this class of methods by introducing novel skewness corrections for the Poisson case and for odds ratio, with and without stratification. Graphical methods are used to compare the performance of these intervals against selected alternatives. The skewness-corrected methods perform favourably in all situations, including those with small sample sizes or rare events, and the skewness correction should be considered essential for analysis of rate ratios. The stratified method is found to have excellent coverage properties for a fixed effects analysis. In addition, another new stratified score method is proposed, based on the t-distribution, which is suitable for use in either a fixed effects or random effects analysis. By using a novel weighting scheme, this approach improves on conventional and modern meta-analysis methods with weights that rely on crude estimation of stratum variances. In summary, this paper describes methods that are found to be robust for a wide range of applications in the analysis of rates.

13.
Abstract. In this article, we propose a new parametric family of models for real-valued spatio-temporal stochastic processes S(x, t) and show how low-rank approximations can be used to overcome the computational problems that arise in fitting the proposed class of models to large datasets. Separable covariance models, in which the spatio-temporal covariance function of S(x, t) factorizes into a product of purely spatial and purely temporal functions, are often used as a convenient working assumption but are too inflexible to cover the range of covariance structures encountered in applications. We define positive and negative non-separability and show that in our proposed family we can capture positive, zero and negative non-separability by varying the value of a single parameter.
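
A minimal sketch of the separable working model that the proposed family relaxes: an exponential-times-exponential covariance whose matrix over a space-time grid is a Kronecker product.

```python
import numpy as np

def separable_cov(h, u, sigma2=1.0, phi_s=1.0, phi_t=1.0):
    """Separable space-time covariance C(h, u) = sigma^2 * exp(-|h|/phi_s) * exp(-|u|/phi_t),
    the convenient but restrictive working assumption discussed in the abstract."""
    return sigma2 * np.exp(-np.abs(h) / phi_s) * np.exp(-np.abs(u) / phi_t)

# On a space x time grid the separable covariance matrix is a Kronecker product,
# which is what makes the separable working model computationally cheap.
s = np.linspace(0.0, 1.0, 4)                   # 1-D spatial grid (illustrative)
t = np.arange(3.0)                             # time points
Cs = np.exp(-np.abs(s[:, None] - s[None, :]))  # spatial factor
Ct = np.exp(-np.abs(t[:, None] - t[None, :]))  # temporal factor
C = np.kron(Ct, Cs)                            # full covariance, time-major ordering
print(C.shape, separable_cov(0.5, 2.0))
```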

14.
Stochastic Models, 2013, 29(2): 245-255
Consider a risk reserve process under which the reserve can generate interest. For constants a and b such that a < b, we study the occupation time T_{a,b}(t), which is the total length of the time intervals up to time t during which the reserve is between a and b. We first present a general formula for piecewise deterministic Markov processes, which will be used for the computation of the Laplace transform of T_{a,b}(t). Explicit results are then given for the special case that claim sizes are exponentially distributed. The classical model is discussed in detail.
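
A Monte Carlo sketch (Euler discretization) of the occupation time for a reserve earning interest, with exponential claims; all parameter values and the discretization step are illustrative, and the Laplace-transform formulas of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)

def occupation_time(u0=5.0, a=2.0, b=8.0, t_end=10.0, lam=1.0, mu=1.0,
                    premium=1.5, delta=0.03, dt=0.001):
    """Euler approximation of T_{a,b}(t_end): total time up to t_end that a reserve with
    premium rate, interest force delta and exponential(mean mu) claims arriving at Poisson
    rate lam spends strictly between a and b."""
    r, t, occ = u0, 0.0, 0.0
    next_claim = rng.exponential(1.0 / lam)
    while t < t_end:
        if a < r < b:
            occ += dt
        r += (premium + delta * r) * dt          # premiums plus interest on the reserve
        t += dt
        if t >= next_claim:
            r -= rng.exponential(mu)             # claim payment
            next_claim += rng.exponential(1.0 / lam)
    return occ

print(np.mean([occupation_time() for _ in range(200)]))
```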

15.
We consider three interval estimators for linear functions of Poisson rates: a Wald interval, a t interval with Satterthwaite's degrees of freedom, and a Bayes interval using noninformative priors. The differences in these intervals are illustrated using data from the Crash Records Bureau of the Texas Department of Public Safety. We then investigate the relative performance of these intervals via a simulation study. This study demonstrates that the Wald interval performs poorly when expected counts are less than 5, while the interval based on the noninformative prior performs best. It also shows that the Bayes interval and the interval based on the t distribution perform comparably well for more moderate expected counts.
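
A sketch of two of the three intervals for a linear function of Poisson rates: the Wald interval and a simulation-based Bayes interval. The Bayes version assumes independent Jeffreys Gamma(1/2) priors, giving Gamma(X_i + 1/2, rate t_i) posteriors; the paper's noninformative prior and the Satterthwaite-t interval may differ in detail.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

def wald_and_bayes_intervals(counts, exposures, coefs, alpha=0.05, draws=100_000):
    """Wald and simulation-based Bayes intervals for sum_i c_i * lambda_i, with lambda_i
    estimated by X_i / t_i for independent Poisson counts X_i and exposures t_i."""
    counts, exposures, coefs = map(np.asarray, (counts, exposures, coefs))
    est = np.sum(coefs * counts / exposures)
    se = np.sqrt(np.sum(coefs**2 * counts / exposures**2))
    z = stats.norm.ppf(1 - alpha / 2)
    wald = (est - z * se, est + z * se)
    post = sum(c * rng.gamma(shape=x + 0.5, scale=1.0 / t, size=draws)
               for c, x, t in zip(coefs, counts, exposures))
    bayes = tuple(np.quantile(post, [alpha / 2, 1 - alpha / 2]))
    return wald, bayes

print(wald_and_bayes_intervals(counts=[3, 7], exposures=[100.0, 250.0], coefs=[1.0, -1.0]))
```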

16.
This paper investigates two "non-exact" t-type tests, t(k1) and t(k2), of the individual coefficients of a linear regression model, based on two ordinary ridge estimators. The reported results are built on a simulation study covering 84 different models. For models with large standard errors, the ridge-based t-tests have correct levels with considerable gain in power over those of the least squares t-test, t(0). For models with small standard errors, t(k1) is found to be liberal and is not safe to use, while t(k2) is found to slightly exceed the nominal level in a few cases. When the two ridge tests are not winners, the results indicate that they don't lose much against t(0).
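
A hedged sketch of a naive ridge-based t-type test: the ridge estimator with its conditional covariance sigma^2 (X'X + kI)^{-1} X'X (X'X + kI)^{-1}, bias ignored and residual degrees of freedom taken as n - p. The paper's exact statistics t(k1), t(k2) and choices of k may differ.

```python
import numpy as np
from scipy import stats

def ridge_t_tests(X, y, k):
    """t-type statistics for individual coefficients based on the ordinary ridge estimator
    beta(k) = (X'X + kI)^{-1} X'y, using the conditional (bias-ignoring) covariance."""
    n, p = X.shape
    XtX = X.T @ X
    A = np.linalg.inv(XtX + k * np.eye(p))
    beta_k = A @ X.T @ y
    resid = y - X @ beta_k
    sigma2 = resid @ resid / (n - p)
    cov = sigma2 * A @ XtX @ A
    t_stats = beta_k / np.sqrt(np.diag(cov))
    pvals = 2 * stats.t.sf(np.abs(t_stats), df=n - p)
    return t_stats, pvals

rng = np.random.default_rng(12)
X = rng.normal(size=(50, 4))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=50)          # induce collinearity
y = X @ np.array([1.0, 0.0, 0.5, 0.0]) + rng.normal(size=50)
print(ridge_t_tests(X, y, k=1.0))
```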

17.
Euclidean distance k-nearest neighbor (k-NN) classifiers are simple nonparametric classification rules. Bootstrap methods, widely used for estimating the expected prediction error of classification rules, are motivated by the objective of calculating the ideal bootstrap estimate of expected prediction error. In practice, bootstrap methods use Monte Carlo resampling to estimate the ideal bootstrap estimate because exact calculation is generally intractable. In this article, we present analytical formulae for exact calculation of the ideal bootstrap estimate of expected prediction error for k-NN classifiers and propose a new weighted k-NN classifier based on resampling ideas. The resampling-weighted k-NN classifier replaces the k-NN posterior probability estimates by their expectations under resampling and predicts an unclassified covariate as belonging to the group with the largest resampling expectation. A simulation study and an application involving remotely sensed data show that the resampling-weighted k-NN classifier compares favorably to unweighted and distance-weighted k-NN classifiers.
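
A Monte Carlo stand-in for the resampling-weighted rule: average the k-NN class proportions over bootstrap resamples and predict the class with the largest average. The paper computes this expectation analytically; the bootstrap loop here only approximates that ideal quantity.

```python
import numpy as np

rng = np.random.default_rng(9)

def resampling_weighted_knn_predict(X_train, y_train, x_new, k=3, B=500):
    """Predict the class of x_new by averaging Euclidean k-NN class proportions over
    bootstrap resamples of the training set (a Monte Carlo approximation of the
    resampling expectation)."""
    n = len(y_train)
    classes = np.unique(y_train)
    votes = np.zeros(len(classes))
    for _ in range(B):
        idx = rng.integers(0, n, size=n)                 # bootstrap resample
        Xb, yb = X_train[idx], y_train[idx]
        d = np.linalg.norm(Xb - x_new, axis=1)
        nearest = yb[np.argsort(d)[:k]]
        votes += np.array([(nearest == c).mean() for c in classes])
    return classes[np.argmax(votes)]

X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(resampling_weighted_knn_predict(X, y, x_new=np.array([0.5, 0.5])))
```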

18.
Various criteria have been proposed for determining the reliability of noncompartmental pharmacokinetic estimates of the terminal disposition phase half-life (t1/2) and the extrapolated area under the curve (AUCextrap). This simulation study assessed the performance of two frequently used reportability rules: the terminal disposition phase regression adjusted-r2 classification rule and the regression data point time span classification rule. Using simulated data, these rules were assessed in relation to the magnitude of the variability in the terminal disposition phase slope, the length of the terminal disposition phase captured in the concentration-time profile (data span), the number of data points present in the terminal disposition phase, and the type and level of variability in concentration measurement. The accuracy of estimating t1/2 was satisfactory for data spans of 1.5 and longer, given low measurement variability; and for spans of 2.5 and longer, given high measurement variability. Satisfactory accuracy in estimating AUCextrap was only achieved with low measurement variability and spans of 2.5 and longer. Neither of the classification rules improved the identification of accurate t1/2 and AUCextrap estimates. Based on the findings of this study, a strategy is proposed for determining the reportability of estimates of t1/2 and area under the curve extrapolated to infinity.
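
A sketch of the standard noncompartmental quantities involved: the terminal rate constant from a log-linear fit, t1/2, the extrapolated area C_last/lambda_z, and the adjusted r^2 used by the first reportability rule. Selection of the terminal-phase points is assumed to have been done already.

```python
import numpy as np

def terminal_phase_estimates(times, conc):
    """Log-linear regression on the chosen terminal-phase points: returns lambda_z,
    t1/2 = ln(2)/lambda_z, AUCextrap = C_last/lambda_z, and the adjusted r^2 of the fit."""
    t = np.asarray(times, dtype=float)
    c = np.asarray(conc, dtype=float)
    logc = np.log(c)
    slope, intercept = np.polyfit(t, logc, 1)
    lambda_z = -slope
    resid = logc - (slope * t + intercept)
    r2 = 1.0 - np.sum(resid**2) / np.sum((logc - logc.mean())**2)
    adj_r2 = 1.0 - (1.0 - r2) * (len(t) - 1) / (len(t) - 2)
    return lambda_z, np.log(2.0) / lambda_z, c[-1] / lambda_z, adj_r2

print(terminal_phase_estimates([4.0, 6.0, 8.0, 12.0], [2.1, 1.4, 0.95, 0.42]))
```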

19.
The authors examine the robustness of empirical likelihood ratio (ELR) confidence intervals for the mean and M-estimate of location. They show that the ELR interval for the mean has an asymptotic breakdown point of zero. They also give a formula for computing the breakdown point of the ELR interval for M-estimate. Through a numerical study, they further examine the relative advantages of the ELR interval to the commonly used confidence intervals based on the asymptotic distribution of the M-estimate.
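
A sketch of Owen-style empirical likelihood for the mean: the profile Lagrange multiplier is found by root-finding, and the interval is read off a grid of candidate means against the chi-squared cut-off. The breakdown analysis of the paper is not reproduced.

```python
import numpy as np
from scipy import optimize, stats

def elr_stat_mean(x, mu):
    """-2 log empirical likelihood ratio for the mean at mu: weights
    w_i = 1 / (n * (1 + lam * (x_i - mu))) with lam solving the profile equation."""
    x = np.asarray(x, dtype=float)
    d = x - mu
    if not (d.min() < 0 < d.max()):
        return np.inf                                    # mu outside the convex hull
    eps = 1e-10
    lo, hi = -1.0 / d.max() + eps, -1.0 / d.min() - eps
    g = lambda lam: np.sum(d / (1.0 + lam * d))          # strictly decreasing in lam
    lam = optimize.brentq(g, lo, hi)
    return 2.0 * np.sum(np.log1p(lam * d))

def elr_interval_mean(x, alpha=0.05):
    """ELR confidence interval for the mean: all mu with -2 log ELR <= chi2_{1, 1-alpha},
    found here by a crude grid search between the sample extremes."""
    cut = stats.chi2.ppf(1 - alpha, df=1)
    grid = np.linspace(np.min(x) + 1e-6, np.max(x) - 1e-6, 2000)
    inside = grid[[elr_stat_mean(x, m) <= cut for m in grid]]
    return inside.min(), inside.max()

rng = np.random.default_rng(11)
print(elr_interval_mean(rng.exponential(size=50)))
```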

20.
We consider the classic problem of interval estimation of a proportion p based on binomial sampling. The 'exact' Clopper-Pearson confidence interval for p is known to be unnecessarily conservative. We propose coverage adjustments of the Clopper-Pearson interval that incorporate prior or posterior beliefs into the interval. Using heatmap-type plots for comparing confidence intervals, we show that the coverage-adjusted intervals have satisfying coverage and shorter expected lengths than competing intervals found in the literature.
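
A small sketch of the unadjusted Clopper-Pearson interval via beta quantiles, the conservative baseline that the proposed coverage adjustments modify.

```python
from scipy import stats

def clopper_pearson(x, n, alpha=0.05):
    """Exact Clopper-Pearson interval for a binomial proportion based on beta quantiles."""
    lower = stats.beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = stats.beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

print(clopper_pearson(x=7, n=50))
```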
